08 Sep UW attracts speaker on network that could supplant Internet

The Internet may have become a victim of its own success.
In a seminar Tuesday evening held by the University of Wisconsin-Madison’s computer science department, a leading Princeton researcher said the commercial Internet is no longer friendly to experimentation.
Larry Peterson, chair of Princeton’s computer science department, directs PlanetLab, a distributed network of computers that, while it uses the Internet for basic information-passing, allows researchers to try out their own network programs and architectures.
“Because we no longer have access to [the Internet protocol], the core of the Internet, we’ll just bypass it and do an end-run around the problem,” he said.
The issue, he explained, is that established network protocols are so ingrained into commercial router hardware and other applications that experimenting with them on the Internet itself would require persuading companies such as Cisco to change their products — no easy task.
Because PlanetLab runs on top of the Internet instead of in its core, performance is about an order of magnitude worse, he said. The Internet itself started the same way: It was layered on top of the telephone network, which did not begin implementing more efficient data-transmission technologies until data already made up a large share of its traffic.
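The overlay idea can be sketched in a few lines. This is not PlanetLab's actual software; the node names and routing table here are invented for illustration. The point is that each overlay "hop" is ordinary user-space code handing a message to the next node over the regular Internet, which is both why researchers can experiment freely and why performance suffers.

```python
# Illustrative sketch of an overlay network: experimental routing
# logic lives entirely at the application layer, on top of the
# Internet's core protocols. Names and routes are hypothetical.

class OverlayNode:
    def __init__(self, name):
        self.name = name
        self.routes = {}  # destination name -> next-hop OverlayNode

    def send(self, dest, payload, path=None):
        """Forward payload toward dest, recording the overlay path.

        In a real deployment each hop would be a full trip across the
        underlying Internet, which is where the order-of-magnitude
        performance penalty comes from.
        """
        path = (path or []) + [self.name]
        if self.name == dest:
            return path
        return self.routes[dest].send(dest, payload, path)

a, b, c = OverlayNode("A"), OverlayNode("B"), OverlayNode("C")
a.routes["C"] = b  # experimental route: A reaches C via B
b.routes["C"] = c
path = a.send("C", "hello")  # the message traverses A -> B -> C
```

Nothing in the Internet's core routers had to change to make the A-to-C route go through B; the decision was made entirely by the overlay code.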
And Peterson sees experimentation as crucial for the evolution of the network. The current incarnation of the Internet is vulnerable to a host of viruses, worms and other maladies, which new protocols and network architectures could fix. “We’ve lost our research playground to try these ideas out,” Peterson said.
Some of the applications running on PlanetLab attempt to “map” and measure the Internet, trace network failures down to specific ISPs, and trace the geographic spread of worms. Others focus on distributed storage and delivery of content.
Worldwide presence
PlanetLab allows researchers — as well as a growing number of commercial enterprises — to run programs on hundreds of computers distributed across the world. The project now includes 439 computers in 195 locations, spread among 26 countries.
Most of the computers are in the United States and Europe, but every continent except Antarctica and Africa has at least a few.
In order to gain access to the network, an institution must offer computers of its own to add to the network. The computers run PlanetLab's customized version of Linux (now based on Red Hat 9 and soon to migrate to Fedora, the next-generation free version of Red Hat) and are administered centrally over the network. Peterson said future versions would put more control in the hands of local administrators.
The University of Wisconsin has several computers connected, he said. Google is also a partner and provides colocation assistance.
Worldwide perspective
With this emphasis on geographic distribution, the project aims to be more than the typical distributed architecture used as a cheap supercomputer. Anyone who would be content with a thousand CPUs in one room probably does not need what PlanetLab offers, Peterson said.
“It’s the vantage point that’s the real key, not the processors,” he said.
The system's worldwide spread, for example, is what allows it to measure how information is routed through the Internet. Computers on the network can run traceroute, a program that maps the route a packet of information takes over the Internet. A large number of computers all running traceroute together can create a map not just of the route a single packet takes, but of how the network overall is routing information.
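The aggregation step can be sketched as follows. This is a hypothetical illustration, not the code of any PlanetLab project: each vantage point contributes the ordered list of router addresses its traceroute observed, and merging them yields a hop-level map of the network. The IP addresses are documentation examples.

```python
# Hypothetical sketch: merge traceroute paths from many vantage
# points into one hop-level graph of the network.
from collections import defaultdict

def merge_traces(traces):
    """Build an adjacency map from a list of traceroute paths.

    Each trace is the ordered list of router IPs one vantage point
    saw on the way to a destination.
    """
    graph = defaultdict(set)
    for path in traces:
        for hop, next_hop in zip(path, path[1:]):
            graph[hop].add(next_hop)
    return graph

# Two vantage points reaching the same destination by different routes:
traces = [
    ["10.0.0.1", "192.0.2.1", "198.51.100.7", "203.0.113.5"],
    ["10.1.0.1", "192.0.2.9", "198.51.100.7", "203.0.113.5"],
]
graph = merge_traces(traces)
# Both paths converge at 198.51.100.7, so the merged map shows how
# separate routes share links deep in the network.
```

One vantage point sees only its own path; many vantage points, merged this way, reveal shared links and convergence points that no single traceroute could.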
Another project, which the University of Wisconsin participates in, monitors “unused” Internet addresses that nobody doing legitimate business should send anything to.
Malicious programs that choose addresses to attack at random may accidentally choose these unused addresses, which means that monitoring them can give researchers an idea of what worms and viruses are spreading at the moment.
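The logic behind such a "network telescope" can be sketched briefly. This is an illustrative toy, not the monitoring software the article describes; the unused prefix and the packet log are invented. Because no legitimate host lives in the watched address block, every packet sent there is suspect, and counting distinct senders gives a rough picture of scanning activity.

```python
# Illustrative "network telescope" sketch: count traffic addressed
# to an unused prefix, where any packet is likely a random-scanning
# worm or other malicious probe. Addresses are hypothetical.
from collections import Counter
from ipaddress import ip_address, ip_network

DARK_PREFIX = ip_network("198.51.100.0/24")  # assumed unused block

def telescope(packets):
    """Count packets per source IP that target the unused prefix."""
    hits = Counter()
    for src, dst in packets:
        if ip_address(dst) in DARK_PREFIX:
            hits[src] += 1
    return hits

observed = [
    ("203.0.113.9", "198.51.100.17"),   # random-scan probe
    ("203.0.113.9", "198.51.100.201"),  # same infected host again
    ("192.0.2.4", "8.8.8.8"),           # ordinary traffic, ignored
]
hits = telescope(observed)
# A source that repeatedly hits the dark prefix looks like an
# infected machine scanning addresses at random.
```

Spikes in the number of distinct sources hitting the dark prefix are what let researchers spot a new worm spreading.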
The wired Frankenstein
Responding to a question from the audience, Peterson admitted that malicious code run on PlanetLab itself could potentially be devastating. “This is the world’s most efficient distributed denial of service platform,” he said.
But the system has checks on it, including the ability to restrict specific users’ access to the Internet and their bandwidth. The monitoring programs implemented on PlanetLab should also, at least in theory, be able to identify exactly where an attack was coming from.
In the future, however, especially if PlanetLab emerges from the academic world and finds commercial applications — as the Internet did — its administrators will have to deal with problems more dire than where the next grant is coming from.
“We are looking at PlanetLab as a way of influencing the Internet,” Peterson said.
Some projects using PlanetLab
ScriptRoute — measure how computers route packets to one another
“Network telescopes,” Netbait — chart the spread and behavior of malicious programs
OceanStore, LOCI, CoDeeN, Coral — store and deliver content using a load-balancing distributed network
Sophia, PIER — observe how a network is behaving compared to how it should behave
Internet-in-a-Slice, I3, Pluto — recreate the Internet “on top of the Internet” and experiment with routing technologies