Internet congestion, innovation addressed

Thomas Murtagh, professor of computer science, addressed issues of internet congestion and its control in the second of his two Sigma Xi lectures, “Traffic Jams on the Information Superhighway,” on Friday.

After beginning with a review of the previous day’s lecture, Murtagh focused on the factors that make connections to the internet slow or unreliable. Addressing the Williams network, Murtagh said, “You should expect that, because the network being slow is the natural and expected thing. We are all sharing one happy little wire from here to the rest of the world.”

Murtagh explained that, ideally, each computer on the Williams network should expect to receive the bandwidth of this wire divided by the number of users sharing the connection at any one time. Unfortunately, users rarely receive this much bandwidth.
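As a rough illustration of that ideal share (the figures below are invented for the example, not the College's actual numbers):

```python
# Hypothetical figures, chosen only to illustrate the arithmetic.
link_bandwidth_kbps = 45_000   # a shared 45 Mbps connection, in kilobits per second
active_users = 300             # computers using the link at the same moment

ideal_share_kbps = link_bandwidth_kbps / active_users
print(f"Ideal per-user share: {ideal_share_kbps:.0f} kbps")   # 150 kbps
```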

Because the internet consists of many individual networks, computers called routers are responsible for sending data between these networks.

If too many networks send data to a router at once, it is unable to pass the data along to the other networks to which it is attached.

Murtagh compared a router to an intersection, and explained that internet congestion is not unlike a traffic jam. “But there’s a big difference,” he cautioned. “When highway intersections get backed up, the cars wait out on the highway. … You can’t have messages wait on fiber or wires. They travel along the wires at the speed of light, and they want to get off at the end.”

Backed-up messages are stored in routers’ memory until they can be sent. Unfortunately, the memory of a router is finite. When the router runs out of memory, it simply begins to erase messages.
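A small sketch makes the behavior concrete. The buffer size and arrival pattern here are invented for illustration; real routers are far more elaborate, but the store-then-drop logic is the same in spirit:

```python
from collections import deque

BUFFER_SIZE = 4     # hypothetical: this router has room for only four waiting packets
OUTPUT_RATE = 1     # packets it can forward per time step

buffer, dropped = deque(), []

# Five packets arrive in each of three time steps, more than the router can forward.
for step in range(3):
    for i in range(5):
        packet = f"packet-{step}-{i}"
        if len(buffer) < BUFFER_SIZE:
            buffer.append(packet)    # room left: hold the message until it can be sent
        else:
            dropped.append(packet)   # memory full: the message is simply erased
    for _ in range(min(OUTPUT_RATE, len(buffer))):
        buffer.popleft()             # forward what the outgoing link can carry

print(f"Still queued: {len(buffer)}, dropped: {len(dropped)}")   # dropped: 9
```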

“Picture the traffic cop walking up to your car, saying, ‘I’m sorry, there’s a traffic jam,’ and shooting you,” Murtagh said.

Murtagh explained, however, that the internet’s Transmission Control Protocol (TCP) has a built-in system for dealing with data loss. Whenever a piece of data, or packet, reaches its intended recipient, the recipient sends back an acknowledgement that it has received the data. Messages that go unacknowledged can thus be resent.
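In outline, the sender's half of that scheme looks something like the sketch below. This is a simplification, not the real TCP state machine, and send_packet, wait_for_ack and the timeout value are hypothetical stand-ins for machinery the actual protocol provides:

```python
def reliable_send(packet, send_packet, wait_for_ack, timeout=1.0, max_tries=5):
    """Resend a packet until the recipient acknowledges receiving it.

    send_packet and wait_for_ack are hypothetical stand-ins for the
    network operations a real TCP implementation performs.
    """
    for _ in range(max_tries):
        send_packet(packet)
        if wait_for_ack(packet, timeout):   # recipient confirmed the data arrived
            return True
        # No acknowledgement within the timeout: assume the packet was lost and resend.
    return False                            # give up after repeated losses
```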

The problem with this system is that it reduces the efficiency of the internet. Once more packets are sent to a router than the router can handle, it becomes necessary to resend old packets.

Old packets are sent in addition to new packets, and because the router is still clogged, both the old packets and the new packets must be sent again. The congestion begins to fuel itself, and the network becomes useless.

The key to avoiding congestion, Murtagh said, is to send messages from computer to computer at the rate of the slowest connection between them.

Although having routers tell other routers how busy they currently are may seem like a feasible solution, Murtagh said that this would be undesirable because it requires that even more packets be sent over the internet.

The way that TCP actually works is to limit the number of outstanding packets — sent packets for which no acknowledgement has been received. Every time the computer sends out a certain number of packets, it checks to see whether any have been lost. If none have been lost, the number of packets that may be outstanding at any given time is increased by one. If packets have been lost, the limit is cut in half.

The transmission rate thus resembles a zigzag pattern, as the maximum allowable number of outstanding packets increases linearly before it is cut in half.
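That rule, sometimes called additive increase, multiplicative decrease, fits in a few lines. The sketch below starts from a window of one packet, an assumption made only for the trace:

```python
def adjust_window(window, packets_lost):
    """Update the limit on outstanding packets after a round of sending.

    Grow by one packet when nothing was lost; cut the limit in half
    when losses are detected, as Murtagh described.
    """
    if packets_lost:
        return max(1, window // 2)   # losses signal congestion: back off sharply
    return window + 1                # everything acknowledged: probe for more bandwidth

# Tracing the rule produces the zigzag pattern Murtagh described.
window = 1
for loss in [False, False, False, False, True, False, False, True]:
    window = adjust_window(window, loss)
    print(window, end=" ")   # 2 3 4 5 2 3 4 2
```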

Murtagh went on to describe an alternate approach to transmission rate regulation called TCP Vegas. TCP Vegas, rather than constantly changing its transmission rate, attempts to find an equilibrium point at which to remain.
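The idea behind Vegas, loosely sketched below with invented alpha and beta thresholds rather than the published algorithm's exact details, is to compare the throughput the connection expects with the throughput it actually measures, and to hold the sending window steady once the two nearly agree:

```python
def vegas_adjust(window, base_rtt, current_rtt, alpha=1, beta=3):
    """Nudge the window toward an equilibrium instead of oscillating.

    base_rtt is the round-trip time with empty queues; current_rtt is the
    latest measurement. alpha and beta are illustrative thresholds on the
    estimated number of this connection's packets sitting in router queues.
    """
    expected_rate = window / base_rtt      # throughput if nothing were queued
    actual_rate = window / current_rtt     # throughput actually being achieved
    queued = (expected_rate - actual_rate) * base_rtt   # rough backlog estimate

    if queued < alpha:
        return window + 1    # little backlog: there is room to send faster
    if queued > beta:
        return window - 1    # backlog building: ease off before packets are lost
    return window            # near equilibrium: hold the rate steady
```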

Vegas, its creators claimed, performed between 37 and 71 percent better than TCP Reno, the protocol used on the internet today.

Although TCP Vegas is in theory a much more desirable protocol than TCP Reno, no one uses it. The reason for this, Murtagh said, is simple: although a network with every computer running Vegas will outperform a network running Reno, in a mixed environment computers running Reno will get considerably more bandwidth than those running Vegas.

“TCP Vegas is a wimp,” Murtagh said. “TCP Vegas, when things seem to be getting congested, backs off. TCP Reno, until it begins losing packets, keeps adding one.”
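A toy competition over one shared link, with every number invented for the example, shows the effect of that asymmetry:

```python
capacity = 20        # packets the link can carry before queues start to build
buffer_room = 5      # extra packets the router can queue before dropping any
reno, vegas = 5, 5   # each connection's current window

for _ in range(50):
    total = reno + vegas
    congested = total > capacity            # Vegas senses the queues building
    lost = total > capacity + buffer_room   # Reno reacts only to actual losses

    # Reno keeps adding one until it loses packets, then halves.
    reno = max(1, reno // 2) if lost else reno + 1
    # Vegas backs off as soon as it detects congestion, well before any loss.
    vegas = max(1, vegas - 1) if congested else vegas + 1

print(reno, vegas)   # Reno ends up holding nearly all of the link
```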

Since not everyone on the internet can be made to switch to Vegas, it is not a useful solution.

The Vegas/Reno dilemma hints at a larger problem the internet may one day face: TCP is not the only protocol out there. And while TCP is a reasonably responsible electronic citizen, malicious protocols of the future might not be so kind.

For example, companies might try to create protocols that demand more bandwidth to give their users an advantage.

The final message of Murtagh’s lecture was simply that this problem deserves considerable further thought: “If we do it wrong, we might be stuck with it forever.”