Neural Networkers: Princeton researchers will use the new AI lab to attack challenges in the field just as soon as they determine what the problems are. In their office space at 1 Palmer Square, images of university buildings decorate the walls.

It’s the 21st century. So don’t be so surprised that artificial intelligence isn’t science fiction anymore. Granted, it’s not quite science fact yet, but the collaboration between Google and Princeton University is looking to smooth out that particular wrinkle.

Earlier this month, Google opened an artificial intelligence lab in Princeton with the help of two university professors, Elad Hazan and Yoram Singer. Both are professors of computer science at Princeton and both now split their time working for Google and the university.

The lab is based at 1 Palmer Square, across the street from Nassau Hall, and according to the university it is expected to “expand New Jersey’s burgeoning innovation ecosystem by building a collaborative effort to advance research in artificial intelligence.”

In addition to professors Hazan and Singer, the lab will employ several university faculty members, some graduate and undergraduate student researchers, recent graduates, and software engineers.

Singer said the lab will, ideally, give machine learning theorists at Princeton the chance to work on “real-world computing problems,” and give Google a “long-term, unconstrained” crack at figuring out AI in a purely academic setting.

Hazan said that a primary focus of the group will be developing “efficient methods for faster training of learning machines” in order to train and build “deep neural networks.”
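What such “efficient methods” look like in practice can be glimpsed in AdaGrad, an adaptive-gradient training rule that Hazan and Singer helped develop with John Duchi. The short Python sketch below is only an illustration of that idea, not code from the lab; the toy objective and the parameter values are invented for the example.

# A minimal sketch of the AdaGrad update (Duchi, Hazan & Singer, 2011):
# each parameter's step size shrinks according to the gradients it has
# seen so far. Illustrative only; not the Princeton lab's actual code.
import numpy as np

def adagrad_step(params, grads, accum, lr=0.1, eps=1e-8):
    """One AdaGrad update: accumulate squared gradients, then scale the step."""
    accum += grads ** 2                            # running sum of squared gradients
    params -= lr * grads / (np.sqrt(accum) + eps)  # per-coordinate step size
    return params, accum

# Toy usage: minimize f(w) = ||w||^2, whose gradient is 2w.
w = np.array([3.0, -2.0])
accum = np.zeros_like(w)
for _ in range(100):
    w, accum = adagrad_step(w, 2 * w, accum)
print(w)  # w shrinks toward the minimizer at the origin

The design idea is that coordinates with a history of large gradients take smaller steps, which is one reason adaptive methods can train models faster on problems with uneven or sparse gradients.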

Yoram Singer, above, and Elad Hazan, both professors of computer science focused on machine learning, are working for Google and Princeton University on AI.

Jennifer Rexford, chair of the Department of Computer Science at Princeton, said the new venture comes at a time of significant growth in computer science and related areas of data science at Princeton. “The work with Google will complement all three pillars of excellence that make data science at Princeton strong today,” she said. “A foundation in the theory and math behind computing; collaborations that are accelerating discovery across fields such as genomics, neuroscience, chemistry, psychology and sociology; and leadership, through our Center for Information Technology Policy, in the broader societal implications of computing such as bias and ethics in AI, privacy and security.”

Singer was on the research staff at AT&T in the late 1990s before becoming an associate professor at Hebrew University of Jerusalem, where he stayed until 2007. He then joined Google’s research team and Princeton University, where he is a professor of computer science concentrating in machine learning.

Hazan earned his bachelor’s and master’s degrees in computer science from Tel Aviv University in 2001 and 2002, respectively, then a second master’s and his Ph.D. from Princeton in 2004 and 2006. He was on the research staff at IBM Almaden from 2006 to 2010 before becoming a professor at the Israel Institute of Technology. He returned to Princeton University in 2016, and in 2017 he co-founded In8, an AI firm acquired by Google that helped pave the way for the AI lab at Princeton.

What’s happening in AI? Earlier this month, Xinyi Chen, a former student and now colleague of Hazan’s at the lab, told the Daily Princetonian, the university’s student newspaper, “We hear about what people need in deep learning, what problems they have, what kind of trade-offs they want. With these new problems in mind, we can come up with more impactful work.”

One of the main challenges in machine learning, she said, is defining a proper problem. “It’s a lot of meetings and discussions,” she told the paper. “We explore different potential solutions.”

Cyril Zhang, who works on the AI team, told the Princetonian: “The mission of machine learning is to design agents that are able to act intelligently in changing environments” in which there is noisy or incomplete information. “We’re trying to tackle the most fundamental mathematical abstractions of decision-making.”
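One classic mathematical abstraction of decision-making under noisy, incomplete information is “prediction with expert advice,” handled by the multiplicative-weights (Hedge) rule. The sketch below is a standard textbook illustration of the kind of abstraction Zhang describes, not the lab’s code; the loss data and the learning rate are made up.

# Hedge / multiplicative weights: keep a distribution over "experts" and
# downweight the ones that incur loss each round. Illustrative sketch only.
import numpy as np

def hedge(loss_rounds, eta=0.5):
    """Play a mixture over experts; return total expected loss and final weights."""
    n = loss_rounds.shape[1]
    weights = np.ones(n)
    total_loss = 0.0
    for losses in loss_rounds:            # one loss per expert, in [0, 1]
        p = weights / weights.sum()       # current mixture over experts
        total_loss += p @ losses          # expected loss suffered this round
        weights *= np.exp(-eta * losses)  # penalize experts that did poorly
    return total_loss, weights / weights.sum()

# Toy usage: 3 experts over 200 noisy rounds; expert 0 is best on average.
rng = np.random.default_rng(0)
losses = rng.random((200, 3)) * np.array([0.2, 0.6, 0.9])
print(hedge(losses))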

So what could go wrong? Forget the dystopian visions of a future full of malevolent Terminators for now. There will be plenty of time to hide from the robots at a later date. For now, the issues surrounding Google’s partnership with Princeton center on many of the same down-to-earth concerns any business partnership has.

First, there are concerns about conflicts of interest — the potential compromising of an academic institution by one of the wealthiest and most powerful corporations in history. Hazan, however, says partnerships like this one are essential for scientific progress.

“This is very positive, and encouraged” by parties in academia, industry, and government, Hazan told the Princetonian. He said the AI initiative is neither unique nor new to Google, and that it’s the best environment for figuring out how to get machines to learn.

Jon Ort, an opinion editor for the Daily Princetonian, isn’t buying it. In a January 6 op-ed, Ort argued that the partnership endangers academic freedom through its potential conflicts of interest. Ort cited a 2010 case in which Google allegedly hired someone to grease the palms of its academic partners, sometimes with several thousand dollars’ worth of grease, in order to get more of a … partnership.

Ort also cited Google’s ongoing issues with the Federal Trade Commission, which has been waving an antitrust suit at Google for some time. He wrote: “Google’s largesse is a double-edged sword. On one hand, the company fosters legitimate and cutting edge collaboration with universities. On the other, when its dubious conduct comes into question, Google utilizes academia for its own purposes. Through its philanthropic presence, the company gains access to a vast network of legal and scientific scholars.”

A later rebuttal in the paper by Emily Carter, dean of the School of Engineering and Applied Science, argued that “many faculty members collaborate closely with a wide range of companies, not only through sponsored research but also through sabbaticals and student internships.” She admitted that well-crafted agreements “that protect our faculty members’ right to pursue whatever research they see fit and to publish their results at will” are essential.

But she also said that the university and state “have both benefited tremendously from cross-fertilization between industry and academia, starting with the industries brought to the state by Thomas Edison and Alexander Graham Bell.”

Parallels between Edison and Google could also be a double-edged sword. Many people today question the ethics of Thomas Edison, as they question those of Google. But both entities are synonymous with world-changing technology.

But what can (or should) AI do? On January 14, Molly Sharlach of Princeton’s Office of Engineering Communications weighed in with a piece published on the university’s website. In it, she asked some fundamental questions about what intelligent machines could, or should, do.

“Should machines decide who gets a heart transplant?” she asked. “Or how long a person will stay in prison?”

These are real-world questions without easy answers. And they come with a plateful of other questions about autonomy, surrender of authority, fairness, privacy, accountability, and morality among machines.

But the other edge of that sword is that “AI technologies have the potential to help society move beyond human biases and make better use of limited resources,” Sharlach wrote.

Ed Felten, director of Princeton’s Center for Information Technology Policy (CITP), said the center is closely watching how AI evolves.

“Our vision is to take ethics seriously as a discipline, as a body of knowledge, and to try to take advantage of what humanity has understood over millennia of thinking about ethics, and apply it to emerging technologies,” Felten said.

That’s great as an idea. But the questions remain. Princeton politics professor Melissa Lane said that one major question “is whether AI systems should be designed to reproduce the current human decision patterns, even where those decision patterns are known to be profoundly affected by various biases and injustices, or should seek to achieve a greater degree of fairness.”

She followed it up with this: “But then, what is fairness?”

In other words, what are we getting ourselves into? What questions haven’t we thought up yet? What do we know, and what don’t we realize that we don’t yet know? Those answers, of course, will only come from research. Which could lead us to a future brighter than we’ve imagined.

Or not.

This is not a hypothetical question. In 2017 New Jersey’s system of bail was completely overhauled. In the old system, defendants would have to post cash bail as a guarantee of their future court appearance or else go to jail until their trials. This system obviously disadvantaged poor defendants, and sometimes pressured them to plead guilty just to get out of jail. Now, nonviolent offenders, instead of paying bail, are evaluated by a risk assessment algorithm that advises judges on whether the defendants should be jailed or let go.

The reforms have led to more defendants being released and an overall 20 percent reduction in the jail population. But critics argue that despite attempts to make the algorithms unbiased, race still plays a role in the algorithm’s determinations. An investigation by ProPublica revealed that the algorithm considered the defendant’s ZIP code, educational attainment, and family history of incarceration — all of which are proxies for race.
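To see how a feature like ZIP code can act as a proxy, consider a toy “race-blind” risk score. The sketch below is not New Jersey’s actual risk assessment tool, whose formula is not reproduced here; the features, weights, and threshold are invented solely to show that a score can flag one group far more often than another without ever using group membership directly.

# Toy illustration of proxy bias: the score never sees 'group', but a
# ZIP-code-derived feature correlated with the group carries it in anyway.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)                    # hypothetical protected group label
zip_risk = group * 0.8 + rng.normal(0, 0.3, n)   # made-up ZIP feature tracking the group
prior_record = rng.normal(0, 1, n)               # feature independent of the group

# The score uses only the "neutral" features, never the group label itself.
score = 1.5 * zip_risk + 1.0 * prior_record
flagged = score > 1.0

print("flag rate, group 0:", flagged[group == 0].mean())
print("flag rate, group 1:", flagged[group == 1].mean())
# The flag rates differ sharply even though 'group' never enters the score.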

These questions are likely to become more pressing if complex AI programs begin to replace relatively simple algorithms, which at least have clear parameters that can be debated.

Some AI researchers worry that as neural networks become more advanced, the resulting programs will become too complex to be understood even by their creators. They will, in effect, become “black boxes” that produce decisions that no one understands.

What is the point of it all? Then there is the basic question about AI: What is it for? After all, there is no shortage of human beings to perform human tasks with their human brains. Why replicate humanity with a computer?

The answer to this question was revealed earlier in January at Davos, Switzerland. There, the ruling class of the world, the billionaires and corporate bosses who plot the destiny of the economy, gathered to discuss the future. Kevin Roose of the New York Times attended this conference and listened in restaurants and storefronts where executives talked to one another out of the public eye.

Roose reported that one major reason companies are pushing AI so hard is that they are eager to replace human workers with AI programs, which need no pay or time off, will never organize for labor rights, and do none of the other pesky things that human workers do to impede profits.

“All over the world, executives are spending billions of dollars to transform their businesses into lean, digitized, highly automated operations. They crave the fat profit margins automation can deliver, and they see AI as a golden ticket to savings, perhaps by letting them whittle departments with thousands of workers down to just a few dozen,” the Times story asserts.

According to Roose’s reporting, these executives discuss this agenda publicly only behind a smokescreen of corporate doublespeak: “Few American executives will admit wanting to get rid of human workers, a taboo in today’s age of inequality. So they’ve come up with a long list of buzzwords and euphemisms to disguise their intent,” he writes. “Workers aren’t being replaced by machines, they’re being ‘released’ from onerous, repetitive tasks. Companies aren’t laying off workers, they’re ‘undergoing digital transformation.’”

Humanity Fights Back with Rocks and Guns: AI is an abstract concept that is hard to reckon with. But the physical manifestations of the technology, robots and self-driving cars, have felt the wrath of their soon-to-be-obsolete biological forerunners. Reports from California suggest that some motorists there are intentionally ramming self-driving cars. In Arizona, where Waymo self-driving cars are being tested, and where a woman was killed by one, residents have bombarded the cars with rocks. One man was arrested for brandishing a rifle at one. Another, who was drunk, stood in front of a Waymo van, blocking its path until police came to defuse the situation.

Quoted by the Arizona Republic, Phil Simon, an information systems lecturer at Arizona State University and author of several books on technology, suggested that some of the anti-Waymo sentiment was being driven by residents who felt their economic livelihoods were threatened by the new high-tech economy that Waymo represents.

“This stuff is happening fast and a lot of people are concerned that technology is going to run them out of a job,” Simon told the paper. “There are always winners and losers, and these are probably people who are afraid and this is a way for them to fight back in some small, futile way.”

Princeton University, 1 Nassau Hall, Princeton 08544. 609-258-3000. Christopher Eisgruber, president. www.princeton.edu.
