Ernest Davis

The missing component in artificial intelligence today is the common-sense knowledge shared by human beings. If you see a photo of a six-foot-tall person holding a two-foot-tall person in his arms, and you are told they are father and son, you need not ask which person is the parent and which is the son. This is just one example from an article on AI co-written by Gary Marcus and Ernest Davis, professors at New York University.

Davis, who teaches and advises graduate students at the university’s Courant Institute of Mathematical Sciences, will share his work at the Princeton Association for Computing Machinery (ACM) dinner meeting on Thursday, May 16, at 6 p.m. at the Grafton House in Hamilton. The cost is $30, and reservations are required. To register or learn more about the event, visit princetonacm.acm.org/meetings/mtg1905.pdf. “Building AIs with Common Sense” is the group’s final meeting of the 2018-’19 season.

Davis’ work encompasses the automation of common-sense reasoning with a focus on spatial and physical reasoning. His talk will explore the question of why common-sense knowledge is critical in central AI tasks like language understanding, vision, and robotic action. The talk will survey the approaches that have been taken in building AIs with common sense, what has been accomplished, and what remains very difficult.

In a video featured on the ACM website, Davis describes common-sense knowledge as what everybody knows about the world: how physical objects work, how people interact, and how individuals and animals think. Understanding any communication depends on knowing what the text is about, and you can’t know what a text is about without knowing how the world works.

Suppose you read the sentence, “The bat ate the fly.” Does “bat” refer to the animal or to a baseball bat? Using a taxonomic approach, which represents categories and individuals and the relations between them, a program can apply the common-sense rule that only animals eat: a bat, the animal, can eat, but a baseball bat cannot.
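A minimal sketch of that taxonomic idea in Python follows. The taxonomy, sense labels, and rule here are invented for illustration; they are not drawn from Davis’ work or any real knowledge base.

```python
# Toy taxonomy: each word sense maps to a category, and categories
# carry simple common-sense rules. All names here are invented.
IS_A = {
    "bat_animal": "animal",
    "bat_baseball": "artifact",
    "fly_insect": "animal",
}

def can_eat(sense: str) -> bool:
    """Common-sense rule: only animals eat."""
    return IS_A.get(sense) == "animal"

def plausible_subjects(senses):
    """Keep only the senses of 'bat' that can fill the subject
    slot of 'ate', i.e., the senses that denote animals."""
    return [s for s in senses if can_eat(s)]

print(plausible_subjects(["bat_animal", "bat_baseball"]))
# ['bat_animal'] -- the animal reading survives; the baseball bat is ruled out.
```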

A key message from the video is for programmers to integrate an understanding of human common sense into the design process.

The current approach to designing AI systems is based on discerning patterns from big data. But, says Davis, if we are to build broadly intelligent machines, we need to incorporate philosophy and cognitive psychology — the mental processes involved in perception, learning, memory, and reasoning — into the research.

We need to get back to the original method of studying AI, known as knowledge engineering, Davis says. He describes it as creating a system of rules, the fundamental elements of human understanding, that can then be applied in computer programs. One reason this approach has been unappealing to researchers is the huge amount of knowledge that must be hand-coded, a tedious and unglamorous task.
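To make that concrete, here is a small sketch of the rule-based style Davis describes: hand-coded if-then rules applied by forward chaining until no new facts emerge. The facts and the single rule are invented for illustration; a real knowledge-engineered system would need vastly more of both, which is exactly the hand-coding burden mentioned above.

```python
# Sketch of knowledge engineering: hand-coded if-then rules applied
# by forward chaining until no new facts can be derived.
# The facts and the rule below are invented for illustration.
facts = {
    ("parent", "alice", "bob"),
    ("parent", "bob", "carol"),
}

def grandparent_rule(known):
    """If X is a parent of Y and Y is a parent of Z,
    then X is a grandparent of Z."""
    new = set()
    for (r1, x, y1) in known:
        for (r2, y2, z) in known:
            if r1 == r2 == "parent" and y1 == y2:
                new.add(("grandparent", x, z))
    return new

rules = [grandparent_rule]
while True:  # forward chaining: run until the fact set stops growing
    derived = set().union(*(rule(facts) for rule in rules)) - facts
    if not derived:
        break
    facts |= derived

print(("grandparent", "alice", "carol") in facts)  # True
```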

Currently, commercial use of AI programs is limited to niche tasks like machine translation, medical image processing, billing programs, or stacking boxes. Admittedly, there are remarkable successes, like Google’s search engine, Google Translate, and Watson, the I.B.M. Jeopardy champion. AI is good at detecting correlations but has a problem telling us which correlations are meaningful.

For example, from 1998 to 2007 there was an increase in the number of autism diagnoses, and in that same period there was an increase in the sales of organic food. But these facts do not show that eating organic food increases the incidence of autism. The potential for making such a correlation and drawing a faulty conclusion seriously limits AI’s current ability to discern fake news.
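A quick numerical illustration of that trap, using invented figures rather than the actual autism or organic-food data: any two quantities that merely trend upward over the same years will show a Pearson correlation near 1, with no causal link between them.

```python
# Invented numbers (not real autism or organic-food data): two series
# that both rise year over year correlate strongly even though
# neither causes the other.
import statistics

years = list(range(1998, 2008))
diagnoses = [3 * i + 10 for i in range(len(years))]      # rises linearly
organic_sales = [i * i + 5 for i in range(len(years))]   # rises quadratically

r = statistics.correlation(diagnoses, organic_sales)  # Pearson r, Python 3.10+
print(f"r = {r:.3f}")  # about 0.96 -- high correlation, zero causation
```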

Davis argues that research must include models of understanding and common-sense reasoning, symbolic representation, grammatical structure, and an appreciation for the intelligence humans are born with. He and Gary Marcus have coauthored a book, “Rebooting AI: Building Artificial Intelligence We Can Trust,” available in September from Pantheon.

Marcus, a professor of psychology and neural science at NYU, has written several books, including the best seller “Guitar Zero: The New Musician and the Science of Learning.” He has been profiled in the New York Times and People magazine. He founded the machine learning startup Geometric Intelligence, which was acquired by Uber, and went on to serve as the director of Uber AI Labs.

Davis and Marcus have written numerous popular articles on artificial intelligence, which have appeared in the New York Times, the online New Yorker, and other publications.

Davis’ father was a mathematician who worked in numerical computing. As a father-and-son team, they compiled and edited “Mathematics, Substance and Surmise: Views on the Meaning and Ontology of Mathematics,” a collection of essays by mathematicians, philosophers, and psychologists.

Davis earned a bachelor’s degree in mathematics from MIT. He became interested in AI in 1979 while earning his doctorate in computer science at Yale University. He had enjoyed programming and saw AI as the hardest and deepest problem one could program. “Back then I didn’t realize how hard and deep the problem was,” he says.

He has written several books, including “Representations of Common-sense Knowledge” and “Verses for the Information Age.”

In a New York Times article, “A.I. Is Harder Than You Think,” Davis argues that no collection of data will ever match the creativity of human beings or the fluidity of the real world: “The universe of possible sentences is too complex. There is no end to the variety of life — or to the ways in which we can talk about that variety.”

Davis advises students to broaden their view of AI’s potential. Among the benefits he sees in integrating common-sense reasoning into computers is the help such systems could provide to people with disabilities.

“Big Data and deep learning are hot now, but don’t limit yourself to the big data approach,” he says. “Be open to cognitive psychology and philosophy.”
