The difference between virtual reality and augmented reality is that in VR, the entire environment is manufactured. In AR, the world being broadcast is the real thing, but beings or shapes are superimposed digitally as if they really exist there. Think of the first down marker you see in a televised football game. The line looks as if it has been laid down somewhere behind the defense, but it isn’t really on the field.
So what does this have to do with the Marines? Sarnoff Corporation, whose spinoff brought us the first-down marker, is building a project for the Department of Defense that allows warfighters to train for incidents and altercations in an almost-real world.
The system is the first augmented reality program designed to let soldiers — clad in a special set of goggles and a vest — train in actual locations and conditions while doing battle with enemy avatars. Terrorists, enemy soldiers, and other bad guys are superimposed over live images of the terrain a soldier occupies, so U.S. soldiers can practice operations on simulated targets without the constraints of a simulated environment or the expense and inherent risks of conducting a live-action drill.
The technology is being developed under the direction of Teddy Kumar, senior technical director of the vision and robotics lab at Sarnoff. Kumar worked with another Sarnoff employee to develop the technology that inserts ads into the backgrounds of baseball broadcasts for the Sarnoff spinoff company, PVI. From there, PVI went on to develop the first-down marker, though Kumar was not connected to that.
The embedded-ad technology, first seen in 1994, was the first widespread use of computer imagery in live, real-world broadcasting. The real world, however, is part of the problem. A straight line superimposed on a still field at a football game, shot by a camera that barely moves, is one thing. The line doesn’t have to move and is only there for the viewer’s benefit. But soldiers have to move, react, and engage. The optics that record the real world and add digital bad guys to it have to move with them while keeping those bad guys in the proper perspective.
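Keeping an overlay "in the proper perspective" means re-projecting it every frame as the camera's pose changes. A minimal sketch of the idea, assuming a hypothetical pinhole camera with made-up focal length and image center (the article does not describe Sarnoff's actual method):

```python
import numpy as np

def project(point_world, R, t, f=800.0, cx=320.0, cy=240.0):
    """Project a fixed world point (say, an avatar's anchor point)
    into the image of a moving camera.

    R, t  : the camera's rotation matrix and translation vector
    f     : focal length in pixels (assumed value for illustration)
    cx,cy : image center in pixels (assumed values)

    Re-running this each frame as R and t change is what keeps the
    overlaid figure pinned to the same spot in the real scene.
    """
    p_cam = R @ point_world + t           # world -> camera coordinates
    u = f * p_cam[0] / p_cam[2] + cx      # perspective divide onto
    v = f * p_cam[1] / p_cam[2] + cy      # the image plane
    return u, v

# An avatar anchored 5 m straight ahead lands at the image center;
# slide the camera sideways and the anchor shifts in the image.
print(project(np.array([0.0, 0.0, 5.0]), np.eye(3), np.zeros(3)))
print(project(np.array([0.0, 0.0, 5.0]), np.eye(3), np.array([1.0, 0.0, 0.0])))
```

The hard part, of course, is not the projection itself but estimating R and t accurately from the soldier's moving viewpoint.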
If someone stands behind a low wall, for example, you see only the part of the person that sticks out above it. But to superimpose a virtual person behind that wall, the system has to know to render only the top half. That sounds simple as long as you stick to immobile walls. Now think about superimposing the shifting parts of an avatar behind a real person who is running through a simulated hostage situation. You can’t draw the avatar in front of the real person, and you can’t settle for a face floating next to him. You need a realistic enemy, one that reacts if hit in the arm or leg, so the situation can play out naturally.
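The wall example is, at bottom, a per-pixel depth test: draw an avatar pixel only where the avatar is closer to the camera than whatever real surface occupies that pixel. A toy sketch, assuming the system can supply a depth map of the real scene (how Sarnoff actually obtains one is not described in the article):

```python
import numpy as np

def composite(frame, scene_depth, avatar_rgb, avatar_depth, avatar_mask):
    """Overlay an avatar on a camera frame, hiding the parts where a
    real object (e.g., a wall) sits between the camera and the avatar.

    frame        : H x W x 3 camera image
    scene_depth  : H x W depth of the real scene at each pixel
    avatar_rgb   : H x W x 3 rendered avatar image
    avatar_depth : H x W depth of the avatar at each pixel
    avatar_mask  : H x W boolean, True where the avatar covers a pixel
    """
    # The avatar is visible only where it exists AND is nearer than
    # the real surface behind that pixel.
    visible = avatar_mask & (avatar_depth < scene_depth)
    out = frame.copy()
    out[visible] = avatar_rgb[visible]
    return out
```

With a wall one meter away filling the lower half of the depth map and an avatar five meters out, only the avatar's upper half survives the test — exactly the "top half sticks out" effect described above.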
The technology Kumar and his team are developing needs to incorporate goggles and a vest that registers gunfire, for that all-around sensory combat experience. Fighting avatars is only half the equation — they have to fight back in order to provide a more realistic training exercise, Kumar says. The vest provides tactile feedback that training soldiers can feel. And all of this needs to be tied into the soldier’s weapon so that when he fires, the avatar feels it.
Kumar and crew have largely cracked the problems, but the navigation issue — knowing where you are and where the enemy is at all times — is still keeping them at work. Solving that is a matter of time and resolution, he says. Sarnoff will display the technology for the DOD sometime in late April.
Kumar says, “I’m a scientist, not a product developer,” so he doesn’t speculate on what markets or products AR technology will create beyond the obvious — video games. For him, the fun lies in the technology itself. But if the navigation issue can be cracked, it will lead to advances in one of Kumar’s favorite areas, robotics.
Kumar, whose real first name is Rakesh, was born and raised in India, where his father was in business and his mother worked in economics. Education was important to both, but Kumar is the only one in his family with a Ph.D., which he earned from the University of Massachusetts at Amherst in 1992. His bachelor’s is in electrical engineering from IIT Kanpur in India, his master’s in computer engineering from SUNY Buffalo, and his doctorate in computer vision. “I’ve always liked making applications happen,” he says. “I’ve always wanted to make great vision.”
His work in the field includes VideoFlight, an airport security system that allows three dozen cameras to survey large areas by creating overlays. His wife, Renee, a watercolor artist who is involved with the West Windsor-Plainsboro art scene, came up with the name.
Before he got to Sarnoff in 1993, Kumar was at IBM, developing vision-based inspection modules for high-density semiconductor packages used in supercomputers. Now responsible for the vision and robotics divisions at Sarnoff, he sees his next contributions coming in robotics and artificial intelligence.
Solving AR’s navigation problem would be a breakthrough in both fields. “A stepping stone to R2-D2,” Kumar says. We already have robots, but they are not the intelligent androids you see in movies like “Star Wars.” They are the kind you see in movies like “The Hurt Locker.” In that movie, Army bomb disposal crews send radio-controlled robots into tight spots deemed too dangerous for humans.
But the robots do not learn their steps to or from the bomb. If they could leave “visual breadcrumbs” to chart their paths, they could navigate on their own, Kumar says, further removing people from harm’s way.
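The breadcrumb idea can be sketched in a few lines: drop a "crumb" (here, a hypothetical feature vector tagged with a position — the article does not say what Sarnoff's visual landmarks actually look like) at each step on the way in, then find the way out by matching the current view to the closest stored crumb and retracing the trail in reverse:

```python
import math

class BreadcrumbTrail:
    """Toy illustration of 'visual breadcrumbs': record landmarks on
    the way in, retrace them in reverse on the way out."""

    def __init__(self):
        self.crumbs = []  # list of (feature_vector, position) pairs

    def drop(self, features, position):
        """Record a landmark seen at a known position on the way in."""
        self.crumbs.append((list(features), position))

    def localize(self, features):
        """Find the stored crumb whose view best matches the current one."""
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        _, position = min(self.crumbs, key=lambda c: dist(c[0], features))
        return position

    def way_home(self, features):
        """Return the positions to revisit, in reverse order, starting
        from wherever the current view says the robot is."""
        here = self.localize(features)
        idx = next(i for i, c in enumerate(self.crumbs) if c[1] == here)
        return [pos for _, pos in reversed(self.crumbs[: idx + 1])]
```

A robot that dropped crumbs at (0, 0), (1, 0), and (2, 0) on the way to a bomb could then match its current view, conclude it is near (2, 0), and walk the trail back without a human on the joystick.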
The military applications are clear, but Kumar also suggests such technology could lead to a line of robots capable of doing chores and labor. The technology is out there, he says; it’s just a matter of getting to it efficiently.
#b#Sarnoff Corporation#/b#, 201 Washington Road, Box 5300, Princeton 08543-5300; 609-734-2553; fax, 609-734-2040. Mark Clifton, acting president & CEO. Home page: www.sarnoff.com.