There have been many remarkable changes in the world over the last century, but few have surprised me as much as the transformation in public attitude toward my chosen profession, statistics — the science of uncertainty. Throughout most of my life the word “boring” was the most common adjective associated with the noun “statistics.” In the statistics courses that I have taught, stretching back almost 50 years, by far the most prevalent reason that students gave for why they were taking the course was “it’s required.”

This dreary reputation nevertheless gave rise to some small pleasures. Whenever I found myself on a plane, happily involved with a book, and my seatmate inquired, “What do you do?” I could reply, “I’m a statistician,” and confidently expect the conversation to come to an abrupt end, whereupon I could safely return to my book. This attitude began to change among professional scientists decades ago as the realization grew that statisticians were the scientific generalists of the modern information age. As Princeton’s John Tukey, an early convert from mathematics, so memorably put it, “as a statistician, I can play in everyone’s backyard.”

Statistics, as a discipline, grew out of the murk of applied probability as practiced in gambling dens and spread to wide applicability in demography, agriculture, and the social sciences. But that was only the beginning. The rise of quantum theory made clear that even physics, that most deterministic of sciences, needed to understand uncertainty. The health professions joined in as “Evidence-Based Medicine” became a proper noun. Prediction models combined with exit polls let us go to sleep early with little doubt about election outcomes. Economics and finance were transformed as “quants” joined the investment teams, and their success made it clear that you ignore statistical rigor in devising investment schemes at your own peril.

These triumphs, as broad and wide ranging as they were, still did not capture the public’s attention until Nate Silver showed up and started predicting the outcomes of sporting events with uncanny accuracy. His success at this gave him an attentive audience for his early predictions of election outcomes. Talking heads and pundits would opine, drawing on their years of experience and deeply held beliefs, but anyone who truly cared about what would happen went to FiveThirtyEight, Silver’s website, for the unvarnished truth.

After Nate Silver my life was not the same. The response to my declaration about being a statistician became “Really? That’s way cool!” The serenity of long-distance air travel was lost.

As surprising as this shift in attitudes has been, it is still more amazing to me how resistant so many are to accepting evidence as a principal component in deciding between conflicting claims. I chalk this up to three possible reasons:

1. A lack of understanding of the methods and power of the science of uncertainty.

2. A conflict between what is true and what one wishes to be true.

3. An excessive dimness of mind that prevents connecting the dots of evidence to yield a clear picture of the likely outcome.

The first reason is one of my principal motivations in writing this book. The other is my own enthusiasm for this material and how much I want to share its beauty with others. . .

“The modern method is to count; the ancient one was to guess.”

— Samuel Johnson

In the months leading up to Election Day, 2012, we were torn between two very different kinds of outcome predictions. On the one hand were partisans, usually Republicans, telling us about the imminent defeat of President Obama. They based their prognostications on experience, inside information from “experts,” and talking heads from Fox News. On the other were “the Quants,” represented most visibly by Nate Silver, whose predictions were based on a broad range of polls, historical data, and statistical models. The efficacy of the former method was attested to by snarky experts, armed with anecdotes and feigned fervor, who amplified the deeply held beliefs of their colleagues. The latter side relied largely on the stark beauty of unadorned facts. Augmenting their bona fides was a history of success in predicting the outcomes of previous elections and, perhaps even more convincing, a remarkable record of prior success, using the same methods, in predicting the outcomes of a broad range of sporting events.

It would be easy to say that the apparent supporters of an anecdote-based approach to political prediction didn’t really believe their own hype, but were just pretending to go along to boost their own paychecks. And perhaps that cynical conclusion was often true. But how are we to interpret the behavior of major donors who continued to pour real money into what was almost surely a rat hole of failure? And what about Mitt Romney, a man of uncommon intelligence, who appeared to believe that, come January of 2013, he would be moving into the White House? Perhaps, deep in his pragmatic and quantitative soul, he knew that the Presidency was not his destiny, but I don’t think so. I believe that he succumbed to that most natural of human tendencies, the triumph of hope over evidence.

We need not reach into the antics of America’s right wing to find examples of humanity’s frequent preference for magical thinking over empiricism; it is widespread. Renée Haynes (1906–1994), a writer and historian, introduced the useful concept of a boggle threshold: “the level at which the mind boggles when faced with some new idea.” The renowned Stanford anthropologist Tanya Luhrmann illustrates the boggle threshold with a number of examples (e.g., “A god who has a human son whom he allows to be killed is natural; a god with eight arms and a lusty sexual appetite is weird.”). I would like to borrow the term, but redefine it, using her evocative phrase, as the place “where reason ends and faith begins.”

The goal of this book is to provide an illustrated toolkit to allow us to identify that line — that place beyond which evidence and reason have been abandoned — so that we can act sensibly in the face of noisy claims that lie beyond the boggle threshold.

The tools that I shall offer are drawn from the field of data science. We will call the character of the support for claims made to the right of the boggle threshold their “truthiness.”

Truthiness is a quality characterizing a “truth” that a person making an argument or assertion claims to know intuitively “from the gut” or because it “feels right” without regard to evidence, logic, intellectual examination, or facts.

— Stephen Colbert, October 17, 2005

Data science is a relatively recent term, coined by Peter Naur and expanded on by the statisticians Jeff Wu (in 1997) and Bill Cleveland (in 2001). They characterized data science as an extension of the science of statistics to include multidisciplinary investigations, models and methods for data, computing with data, pedagogy, tool evaluation, and theory.

The modern conception is a complex mixture of ideas and methods drawn from many related fields, among them signal processing, mathematics, probability models, machine learning, statistical learning, computer programming, data engineering, pattern recognition and learning, visualization, uncertainty modeling, data warehousing, and high-performance computing. It sounds complicated, and so any attempt at even partial mastery seems exhausting. And, indeed, it is; but just as one needn’t master solid-state physics to operate a TV successfully, so too one can, by understanding some basic principles of data science, learn to think like an expert and thus recognize claims that are made without evidence, banishing them from any place of influence. The core of data science is, in fact, science, and the scientific method, with its emphasis on only what is observable and replicable, provides its very soul.

This book is meant as a primer on thinking like a data scientist. It is a series of loosely related case studies in which the principles of data science are exemplified. There are only a few such principles illustrated, but it has been my experience that these few can carry you a long way.

Truthiness, although a new word, is an old concept that long predates science. It is so well inculcated in the human psyche that trying to banish it is a task of insuperable difficulty. The best we can hope for is to recognize that the origins of truthiness lie in the reptilian portion of our brains, so that we can admit its influence yet still try to curb it through the practice of logical thinking.

Escaping from the clutches of truthiness begins with one simple question. When a claim is made, the first question that we ought to ask ourselves is “how can anyone know this?” And, if the answer isn’t obvious, we must ask the person who made the claim, “What evidence do you have to support it?”

Chapter 2: Piano Virtuosos, 4-Minute Miles: A Dime a Dozen?

“Virtuosos becoming a dime a dozen,” exclaimed Anthony Tommasini, chief music critic of the New York Times, in his column in the arts section of that newspaper on Sunday, August 14, 2011. Tommasini described, with some awe, the remarkable increase in the number of young musicians whose technical proficiency on the piano allows them to play anything. He contrasted this with some virtuosos of the past — he singled out Rudolf Serkin as an example — who had only the technique they needed to play the music that was meaningful to them. Serkin did not perform pieces like “Prokofiev’s finger-twisting Third Piano Concerto or the mighty Liszt Sonata,” although such pieces are well within the capacity of most modern virtuosos.

But why? Why have so many young pianists set “a new level of conquering the piano”? Tommasini doesn’t attempt to answer this question (although he does mention Roger Bannister in passing), so let me try.

We see an apparently unending upward spiral in remarkable levels of athletic achievement, which provides a context in which to consider Tommasini’s implicit riddle. I don’t mean to imply that this increase in musical virtuosity is due to improved diet and exercise, or even to better coaching, although I would be the last to gainsay their possible contributions. I think a major contributor to this remarkable increase in proficiency is population size. I’ll elaborate.

The world record for running the mile has improved steadily, by roughly a third of a second a year, for the past century. When the 20th century began the record was 4:13. It took almost 50 years until Roger Bannister collapsed in exhaustion after completing a mile in just less than four minutes. Within a little more than a decade his record was being surpassed by high school runners (Jim Ryun ran a 3:59 mile in 1964 and a 3:55.3 a year later). And, by the end of the 20th century, Hicham El Guerrouj broke the tape at 3:43.
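To make the trend concrete, here is a minimal arithmetic sketch in Python, taking the 4:13 quoted above as the 1900 baseline and using the standard figures for Bannister’s and El Guerrouj’s records:

```python
# Rough check of the mile record's average rate of improvement,
# using the record times quoted in the text (converted to seconds).
records = {
    1900: 4 * 60 + 13,     # 4:13, start of the 20th century
    1954: 3 * 60 + 59.4,   # 3:59.4, Roger Bannister
    1999: 3 * 60 + 43.13,  # 3:43.13, Hicham El Guerrouj
}
years = max(records) - min(records)
drop = records[1900] - records[1999]
print(f"{drop:.1f} seconds over {years} years = {drop / years:.2f} s/year")
# -> 29.9 seconds over 99 years = 0.30 s/year
```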

What happened? How could the capacity of humans to run improve so dramatically in such a relatively short time? Humans have been running for a very long time, and in the more distant past the ability to run quickly was far more important for survival than it is today. A clue toward the answer lies in the names of the record holders. In the early part of the century the record was held by Scandinavians — Paavo Nurmi, Gunder Hägg, and Arne Andersson. Then, at mid-century, came runners from Britain and the Commonwealth: Roger Bannister, John Landy, Herb Elliott, Peter Snell, and later Steve Ovett and Sebastian Coe. And toward the end of the century the Africans arrived: first Filbert Bayi, then Noureddine Morceli and Hicham El Guerrouj. As elite competition began to include a wider range of runners, times improved. A runner who wins a race that is the culmination of events that winnowed the competition from a thousand to a single person is likely to be slower than one who is the best of a million.

A simple statistical model, proposed and tested in 2002 by Scott Berry, captures this idea. It posits that human running ability has not changed over the past century: in both 1900 and 2000 the distribution of running ability of the human race is well characterized by a normal curve with the same average and the same variability. What has changed is how many people live under that curve. And so in 1900 the best miler in the world (as far as we know) was the best of a billion; in 2000 he was the best of six billion. It turns out that this simple model can accurately describe the improvement in performance in all athletic contests for which there is an objective criterion.
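To see why a bigger population alone pushes the record down, consider a minimal sketch of the idea, not Berry’s actual fitted model: if ability is a standard normal z-score, the expected best of N people can be approximated by the quantile of that curve with upper-tail probability 1/(N + 1). Going from one billion to six billion people lifts the expected best by about 0.3 standard deviations, even though the curve itself never moves.

```python
# A minimal sketch of the population-size effect (not Berry's fitted model).
# Assumption: "running ability" is a standard normal z-score, and the
# expected best of n people is approximated by the quantile with
# upper-tail probability 1/(n + 1).
from scipy.stats import norm

def expected_best_z(n):
    """Approximate expected maximum of n independent standard normal draws."""
    return norm.isf(1.0 / (n + 1.0))  # inverse survival function

for n in (1e9, 6e9):
    print(f"best of {n:.0e} people: z = {expected_best_z(n):.2f}")
# -> best of 1e+09 people: z = 6.00
# -> best of 6e+09 people: z = 6.28
# Translating z-scores into seconds requires a mean and standard deviation
# for mile times, which Berry estimated from data; the ~0.3 SD gap is the
# point: the curve is fixed, only the number of draws from it has grown.
```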

It does not seem far-fetched to believe that the same phenomenon is taking place in other areas of endeavor. Looking over the extraordinary young pianists mentioned by Tommasini, we see names that are commonplace now but would have seemed wildly out of place at Carnegie Hall a century ago — Lang Lang, Yundi Li, and Yuja Wang. As the reach of classical music extended into areas previously untouched, is it any surprise that among the billions of souls who call such places home we would discover some pianists of remarkable virtuosity?
