Get it out of your head. Artificial intelligence is not about to create a world of computers in higher consciousness enslaving humanity or trying to wipe out the human virus. That’s philosophy and science fiction.

The truth is that artificial intelligence is a broad term covering robotics, computer vision, and game playing (see Sarnoff story, this issue). What #b#Sanjeev Kulkarni#/b#, professor of electrical engineering at Princeton University, is working on is machine learning — how we get computer systems and machines to learn from data.

Sounds like a simple concept, doesn’t it? Especially since that’s what computers do — take in data and react accordingly, time after time. But this process is more stimulus-response than actual learning. The computer’s actions are the result of an extremely narrow data set, like a bar code. Veer away from rigid, straight lines, and a scanner can’t read the label.
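
To see the difference, consider a toy version of that scanner — a minimal sketch in Python, not any real scanner's code, with made-up patterns. It answers only to patterns it has stored verbatim, and one character out of place defeats it.

    # Illustrative only: a "scanner" that knows exact patterns and nothing else.
    KNOWN_CODES = {
        "|||  |  ||": "SKU-1001",   # hypothetical bar-code patterns
        "||  ||  | ": "SKU-1002",
    }

    def scan(pattern):
        # Exact lookup: there is no notion of "close enough."
        return KNOWN_CODES.get(pattern, "UNREADABLE")

    print(scan("|||  |  ||"))   # SKU-1001
    print(scan("|||  |  | "))   # UNREADABLE -- one character off and it fails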

Kulkarni will present “Machines Can Learn: The Promise of Artificial Intelligence” at the Princeton Chamber on Wednesday, March 17, at 7:30 a.m. at the Nassau Club. Cost: $40. Visit www.princetonchamber.org, or call 609-924-1776.

Kulkarni enjoys a good sci-fi movie as much as anyone, but as a person who makes his living theorizing about future technology, he knows that the artificial intelligence of the movies is nowhere near what computers can do now, or will be able to do for quite a while. Computers can do a great many things, he says, but their inflexibility is a major obstacle.

“People are very good at learning from data,” he says. “We’re good at recognizing objects or written letters or speech.” We interpret these signals and apply them to existing knowledge and make conscious decisions based on the situation. We hear something go bump in the middle of the night and we follow any of several paths — go check it out, call the police, ignore it, hide in the panic room, blame the cat.

Computers can’t do that. We can program them to scream and flash, or to call the police, if an alarmed door moves, but they do not take into account what moved the door. It might be an intruder, or it might be the wind.

Likewise, a computer cannot read. It can recognize letters that fall within certain guidelines — just because it can read Helvetica print doesn’t mean it can read French script — but it can’t read anything people haven’t taught it to read. Stray beyond those borders and computers fail.
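
Machine learning, by contrast, means generalizing from examples. Here is a deliberately tiny sketch — a nearest-neighbor classifier on invented 3-by-3 letter bitmaps, not any real OCR system — that labels an input it has never seen by finding the stored example it most resembles.

    # Illustrative only: learning letter shapes from labeled examples.
    TRAINING = [
        ((0,1,0, 1,0,1, 1,1,1), "A"),   # hypothetical 3x3 bitmaps,
        ((1,0,0, 1,0,0, 1,1,1), "L"),   # flattened to 9 pixels each
    ]

    def classify(pixels):
        # Pick the label of the training example with the fewest differing pixels.
        def distance(example):
            return sum(p != q for p, q in zip(pixels, example[0]))
        return min(TRAINING, key=distance)[1]

    # One pixel differs from the stored "A," yet the label still comes out right.
    print(classify((0,1,0, 1,0,1, 1,1,0)))   # A

Nothing here was told what an imperfect A looks like; the system copes with it anyway, which is exactly what the exact-match scanner above cannot do.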

#b#The robust brain#/b#. “Natural systems are very robust,” Kulkarni says. “In the human brain, if a neuron dies, or even several neurons, the brain keeps working. Computers are so delicate that you get one speck of dust in there and the whole thing shuts down.”

Building robustness into the system is one of the first things to accomplish. At present we are nowhere near able to build a complex network of computer impulses that can fix its own trouble. Think of ants: they are precise, ordered, and disciplined. Cut off their pheromone trail and they descend into chaos, but only briefly, because ants know how to lay the trail again. Computers do not. Or, if they do, it is only because we have put a failsafe in place. We have told them how to get around trouble.
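
One classic failsafe of that kind is redundancy, sketched below under obviously toy assumptions (the unit count and failure model are invented): run the same reading through several units and take a majority vote, so the answer survives even when a few units die, much as the brain survives a few dead neurons.

    import random
    from collections import Counter

    def redundant_read(true_value, n_units=7, failure_rate=0.2):
        # Each redundant unit reports the truth, unless it has "died,"
        # in which case it reports garbage. The majority vote wins.
        votes = []
        for _ in range(n_units):
            if random.random() < failure_rate:
                votes.append(random.choice([0, 1]))   # failed unit
            else:
                votes.append(true_value)              # healthy unit
        return Counter(votes).most_common(1)[0][0]

    random.seed(0)
    print(redundant_read(1))   # almost always 1, despite the failures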

#b#Sensor networks#/b#. Your platoon is waiting for word to advance, and an Air Force drone has just flown over. Dozens, maybe hundreds, of small devices drop from it. Within seconds they link up and start reading the landscape. These sensors look for different things, record what they find, and send a signal back to your commanders. You now know that for miles ahead there is nothing to stop you. Or that you are walking into a trap about 200 yards away.

The military is pursuing sensor network technology like this, Kulkarni says: networks that can read multiple pieces of data simultaneously and share that information. “A civilian application would be sensors embedded in a bridge,” he says: a network to detect stress and faults, a series of interconnected machines performing constant, mundane, necessary tasks.
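
A standard way such a network could pool its readings is for each node to repeatedly average its value with its neighbors' until the whole network settles on a shared estimate. The sketch below, with invented numbers, shows generic distributed averaging; it is not the military or bridge systems Kulkarni describes.

    # Five sensors on a line; node 4's high reading hints at a fault.
    readings = [9.8, 10.1, 10.4, 9.9, 30.0]
    neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}

    for _ in range(200):   # gossip rounds
        readings = [
            (readings[node] + sum(readings[j] for j in neighbors[node]))
            / (1 + len(neighbors[node]))
            for node in neighbors
        ]

    print([round(v, 2) for v in readings])
    # All five nodes converge to one shared (weighted-average) estimate.

No node ever sees the whole network; each talks only to its immediate neighbors, yet the group as a whole ends up agreeing.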

This is the crux — and the limit, for now — of machine learning: computers that can build on information gathered across their networks. The promise is the ability to remove humans from dangerous or impossible situations by having machines that can figure out what they’re seeing.

The limit is that computers will still not be able to interpret and reason about what they see. They might learn that the shape before them is a cow or a tree — and could possibly be programmed to discern an enemy uniform from a friendly one — but they will not know to take any action. They will simply report what they see to people who will make the decision.

#b#Top down#/b#. Though there has been tremendous progress in computer system design, there has been no fundamental shift in how we approach that design. We still build computer systems like we build everything else — from the top down. We start with a situation and build something in response to it.

Nature, on the other hand, makes order out of randomness look easy. “In nature, things just come together,” Kulkarni says. Disparate elements work in harmony and have developed a system that keeps the whole world moving.

Our brains work the same way — billions of pieces of information from inside and outside the body filter through our brains at once, and yet the brain keeps everything working for us. We can breathe and walk and brush the dust off our jackets at the same time, not even realizing that we’ve blinked. But the brain knew we had to.

Kulkarni wonders whether this randomness effect is the approach we need in developing machines that can learn. “The closest thing to the human brain is the Internet,” he says. Random bits of information from billions of sources have somehow made themselves into a functioning system. And one that, like nature, evolves.

Kulkarni is hesitant to say that this approach is the paradigm shift needed to take artificial intelligence to the next level, but he makes a lot of sense when he talks about it. Perhaps rigid ideas about programming need to be replaced with randomness. Perhaps not. “I don’t know how it will emerge, or what raw materials it will take,” he says. “But it’ll be interesting to see materials that are not so tightly structured.”

Kulkarni is and always has been a math and science guy — almost by birthright, since his father brought the family from India, when Kulkarni was about 3 years old, to finish his own mathematics Ph.D. at Indiana University. Once he had the degree, the elder Kulkarni moved the family to upstate New York, where he taught at SUNY-Potsdam.

Kulkarni earned his bachelor’s, master’s, and Ph.D. in electrical engineering from Clarkson, Stanford, and MIT, respectively. He worked at Lincoln Laboratory, an MIT-run defense lab near Boston, and at Susquehanna, a financial firm in Bala Cynwyd, Pennsylvania. After getting his Ph.D. in 1991 he came to Princeton, where he has been ever since.

Kulkarni likes the academic environment. His work is mainly in developing theories and algorithms, though at Susquehanna he did work on the systems themselves. And though he doesn’t speculate about what applications will come from work like his, he expects the commercial applications will be in the areas of defense and security, public safety, and finance.

Wall Street is particularly interested in developing trading systems, he says — something that could calculate which stocks are doing well and which aren’t, and steer investors in the right direction.
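
The flavor of signal such a system might start from can be sketched in a few lines — a bare-bones moving-average comparison, offered purely as an illustration, with made-up prices; nothing here reflects Kulkarni's models or any firm's. A stock trading above its own recent average is "doing well," and one below it is not.

    def trend_signal(prices, window=5):
        # Compare the latest price with the average of the last `window` prices.
        recent_average = sum(prices[-window:]) / window
        return "doing well" if prices[-1] > recent_average else "lagging"

    print(trend_signal([10.0, 10.2, 10.1, 10.5, 10.8, 11.0]))   # doing well
    print(trend_signal([11.0, 10.8, 10.9, 10.5, 10.2, 10.0]))   # lagging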

But we are so far away from that — and from menacing computer overlords — that it isn’t even on Kulkarni’s radar. Even if we can build smarter machines, humans will still need to do the interpreting. “We’re not at the point where we just trust the system,” he says. “Not yet.”
