What the Internet gives with one hand, it takes away with the other. On the application side, it encourages innovation by allowing anyone with a computer to create a program or website, put it on the net, and sometimes develop a huge user base nearly overnight.
But the Internet’s interstices — the networks that enable computers to talk to each other — are a closed book. “The inside of the network is traditionally dominated by companies that create networking equipment,” says Jennifer Rexford, professor of computer science at Princeton University. “The software that runs on the networking equipment is completely under their control.”
This is true even at the micro level. When a customer purchases a router from Cisco, for example, the hardware and software come bundled together. “You don’t have access to the software,” says Rexford. “You can’t change how the box behaves.”
In recent years, however, researchers have developed a way to circumvent the black boxes that control network communications. With the new OpenFlow standard developed at Stanford University, a small piece of software is embedded in the hardware to enable engineers to access and modify the rules that tell switches and routers how to direct network traffic, while protecting proprietary elements of a particular company’s hardware. This standard allows researchers to define data flows across the Internet using software, a process called “software-defined networking,” or SDN.
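The core abstraction OpenFlow exposes is a table of match-action rules: each rule matches on packet header fields and names an action, and the highest-priority matching rule wins. The sketch below is a hypothetical illustration of that model in plain Python, not the OpenFlow wire protocol; the field names and actions are made up for the example.

```python
def make_rule(priority, match, action):
    """A flow-table entry: 'match' is a dict of header fields, 'action' a string."""
    return {"priority": priority, "match": match, "action": action}

flow_table = [
    make_rule(200, {"dst_port": 22}, "drop"),            # block SSH traffic
    make_rule(100, {"dst_ip": "10.0.0.5"}, "forward:port2"),
    make_rule(0,   {}, "forward:controller"),            # default: ask the controller
]

def lookup(table, packet):
    """Return the action of the highest-priority rule whose fields all match."""
    for rule in sorted(table, key=lambda r: -r["priority"]):
        if all(packet.get(f) == v for f, v in rule["match"].items()):
            return rule["action"]
    return "drop"

print(lookup(flow_table, {"dst_ip": "10.0.0.5", "dst_port": 80}))  # forward:port2
print(lookup(flow_table, {"dst_ip": "10.0.0.9", "dst_port": 22}))  # drop
```

The point of the standard is that a program running outside the switch can install and modify entries like these, instead of the rules being locked inside the vendor's software.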
Ultimately the idea is to enable third-party companies to write software to control the network routers and connections. “It is good to have hardware that can send traffic through the Internet quickly, but we’d also like the ability to control which traffic goes through and which path it takes,” says Rexford. Currently, though, network managers are “forced to configure their networks through the fairly baroque interfaces that today’s equipment vendors offer them.”
Rexford will speak on “Enabling Innovation Inside the Network,” Thursday, January 19, at 7:30 p.m. at the Small Auditorium, Room 105 of the Computer Science building at Princeton University. For more information, call Dennis Mancl at 908-582-7086 or Jan Buzydlowski at 610-902-8343, or check online at www.acm.org/chapters/princetonacm. To attend the pre-meeting dinner at Ruby Tuesday’s at 6 p.m., e-mail firstname.lastname@example.org.
Rexford outlines several ways that software-defined networking can improve the way networks function:
Better support for users who need their applications to keep running as they move, for example, from WiFi to 4G.
Improving security by controlling access and blocking malicious traffic from reaching applications.
Determining which applications get the most bandwidth.
Turning off some parts of the system to save energy.
Controlling Internet traffic. YouTube and Facebook exist in many locations, says Rexford, and these companies would like to more easily control how much traffic goes to each copy and to manage it in an energy-efficient way.
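The last item, steering traffic among copies of a service, can be made concrete with a small sketch. The code below is a hypothetical illustration, not any real controller's API: it assigns each client to one of several data-center replicas in proportion to a configurable weight, so a lightly loaded or cheaper-to-power site can be given a larger share simply by changing its weight.

```python
import hashlib

# Illustrative site names and weights; weight = relative share of traffic.
REPLICAS = {"us-east": 5, "us-west": 3, "europe": 2}

def pick_replica(client_ip, replicas=REPLICAS):
    """Deterministically map a client to a replica, proportionally to weight."""
    total = sum(replicas.values())
    # Hash the client address to a stable point in [0, total).
    h = int(hashlib.sha256(client_ip.encode()).hexdigest(), 16) % total
    for site, weight in sorted(replicas.items()):
        if h < weight:
            return site
        h -= weight

print(pick_replica("192.0.2.17"))  # always the same site for the same client
```

Hashing the client address keeps the mapping stable, so a given user keeps hitting the same copy while the operator adjusts the overall split.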
Those most in need of software-defined networking are companies like Google, Amazon, and Facebook that store data in massive warehouses, built close to low-cost electricity, that hold as many as 100,000 computers linked by over 10,000 network connections.
The importance of software-defined networking grows when the network is big. “You have to automate,” says Rexford. “You can’t have one person manage every 10 or 100 computers. You need to automate how the network behaves, so you need direct control rather than software the equipment vendor wrote. The scale tends to matter. It’s more important for big networks, because you need a higher degree of automation to keep the cost down.”
The OpenFlow standard is gaining good traction, in part because the advent of these large data centers has created a new set of companies, such as Quanta, NEC, and HP, that are now playing a role in networking equipment.
These companies are in competition with the early networking giants, Cisco and Juniper, and have been more willing to adopt new ways to allow users to interact with their equipment, in part to differentiate themselves from the two giants.
Rexford’s interest is in making these new kinds of networks easier to program. “Open standards make it possible, but they don’t make it easy,” she says, pointing to the size of a network, the number of components involved, and a network’s distribution across different locations. And when the software fails, the network goes down; worse, malicious software can worm its way into the network from outside.
In a joint project with Cornell University, Rexford and her colleagues are developing a programming language called Frenetic, with the goal of helping the people who manage huge networks do a better job both building and running them.
The effort occurs at a fairly abstract level, examining the common errors and complexities that arise when writing such software. To gain experience, the researchers are building applications themselves, looking for insight into what makes applications hard to write and where it is easy to go wrong.
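A key idea behind languages like Frenetic is composition: small network policies, written independently, are combined into one. The toy sketch below illustrates that idea in plain Python; it is not Frenetic’s actual syntax. Each policy maps a packet to a set of actions, and parallel composition takes the union, so a routing policy and a monitoring policy can be developed separately and run side by side.

```python
def route(packet):
    """Forward to port 1 for host h1, otherwise port 2 (illustrative topology)."""
    return {f"forward:{1 if packet['dst'] == 'h1' else 2}"}

def monitor(packet):
    """Count web traffic (port 80) without affecting forwarding."""
    return {"count:web"} if packet.get("port") == 80 else set()

def parallel(*policies):
    """Union the actions of several policies: run them 'side by side'."""
    return lambda pkt: set().union(*(p(pkt) for p in policies))

policy = parallel(route, monitor)
print(sorted(policy({"dst": "h1", "port": 80})))  # ['count:web', 'forward:1']
```

Writing the combined behavior by hand, with every interaction between routing and monitoring spelled out, is exactly the kind of error-prone work such languages aim to eliminate.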
Other campuses have also deployed software-defined networking to let professors and students experiment with new ideas about networking in a realistic setting. Operators of enterprise networks connecting various corporate sites are also interested.
Rexford’s father was a Korean linguist and intelligence officer in the Air Force, and her mother works as a high school career counselor. As a “military brat,” Rexford grew up in Korea, Hawaii, Japan, and Virginia.
She graduated from Princeton University in 1991 with a bachelor of science in electrical engineering, and earned a doctorate in electrical engineering and computer science from the University of Michigan in 1996.
Rexford became a professor in the computer science department at Princeton in 2005, after nine years working at AT&T Labs-Research.
She is co-author of the book Web Protocols and Practice (Addison-Wesley, 2001) and the 2004 winner of ACM’s Grace Murray Hopper Award, given to an outstanding young computer professional.
It was at AT&T that she became interested in the challenges facing network operators. Sometimes she would even work the night shift at the operations center, she says, “to get a sense of where the pain is.” That’s where she learned the real challenge with networks.
“The problem is not making it faster or more flexible but making it easier to manage,” she says.
She worked with the network operators to help them learn to operate more effectively, but she was limited in what she could accomplish.
“We could only do things on top of the software that ran on the network equipment,” she explains. “I had my hands tied behind my back when I was trying to help the network operators do their job better.”
Part of what drew Rexford to Princeton was discovering how much she enjoyed working with summer interns at AT&T. “Having a fresh influx of really driven, enthusiastic, smart people; you get a fresh perspective every day,” she says.
The Open Networking Summit at Stanford last October suggested there is tremendous momentum behind the new standard. “It gets at the core problem people struggle with: the cost of managing dominates the cost of equipment,” says Rexford. “First, it is expensive; and second, so many of the failures that happen when things break are caused by human operators making mistakes.”
Surprisingly, the Internet was more stable on 9/11 than on a typical day, even though the parts of it in the Twin Towers were destroyed.
Why was the Internet so reliable on that day? “Network operators went home that day, and the mistakes they normally make didn’t happen,” says Rexford. “People have difficulty managing complexity in their heads and get it wrong. You are holding something fairly complex together with duct tape and chewing gum. SDN doesn’t solve the problem, but it gives us new ways to ask the question and a new way to try them out.”