“Don’t shoot the messenger” might well be Paul Decker’s motto. Decker is the new president and CEO of Mathematica Inc., the Princeton-based non-partisan social research firm that sometimes comes up with controversial findings. Mathematica’s previous CEO, Charles Metcalf, retired after 20 years, and Decker, age 46, moved into the hot seat last month. Not only does Decker run a 520-person, $120 million for-profit company, but he also must stand ready to fend off criticism from whoever might disagree with Mathematica’s results.
Based at Alexander Park, Mathematica does outcomes-based research for government agencies and foundations that need to show they are not throwing good money after bad. These clients commission rigorous studies, but sometimes the results don’t come out the way they are supposed to. Then the mudslinging and messenger shooting might begin, and the worst mudslinging sometimes comes not from the client but from political factions.
One of Mathematica’s more controversial reports was the “abstinence only” education study released last year. It compared programs that taught “abstinence only” with those that taught “abstain if you can, use contraception if you can’t.”
Anyone could have predicted the controversy that ensued. Even while the researchers were designing the study, they knew that if the study seemed to show that abstinence education “worked,” the liberal faction would cry foul. If it turned out that more teens in the “abstinence only” program indulged in sex than in the “use contraception” group, the conservative faction would attack the study.
As if it were a clinical trial, a large number of youth were chosen by random selection and studied over a long period of time. In 1999 more than 2,000 middle school students were randomly enrolled in four locations — Powhatan County, Virginia; Miami, Florida; Clarksdale, Mississippi; and Milwaukee, Wisconsin. About 1,200 of them got special abstinence-only education as mandated by Title V, Section 510 funding, and about 850 received only the usual health instruction in their schools. They received from one to three years of training when they were ages 11 to 13, and in two of the four programs the students attended classes daily. Surveyed again at about age 16, they reported having similar numbers of sexual partners as those who did not attend the classes, and that they first had sex at about the same age as their control group counterparts — 14.9 years.
The abstinence study did not report a significant change in sexual behavior among the teens who did not learn about contraception. But when the expected firestorm erupted the researchers could point to the scientific nature of their method (see sidebar, page 43).
Decker’s mild response: “You will always get responses all across the spectrum. We simply stuck to the objective. The findings are pretty clear that there wasn’t a change in reported sexual behavior.” The client, the federal government, did not question the research methods. “The agency was very supportive,” says Decker.
As CEO it will be up to Decker to resist pressure from factions that are blindsided by a study that came out the “wrong” way, and the researchers frequently do come up with unexpected results. “The surprise in the policy community, in general, is how often we find that programs or interventions do not have the expected effect,” says Decker.
Disappointed clients must tough it out. Says Decker: “Federal agencies are obligated to investigate in a non-biased way what the effects of their programs are. If we feel pressured to change the tone of our findings, we fight against the pressure. We want to maintain the objectivity and integrity of our research because that is the essence of our mission.”
To forestall such controversy, Mathematica picks its clients and collaborators with caution. “We don’t want to spend a lot of time conducting research for policy advocates who already have a point of view,” says Decker. “When we come up with an objective answer they would try to wrestle us into a certain way of presenting it.”
Mathematica is a distant cousin to the market research industry (which was born, with the Gallup Poll and Opinion Research, here in Princeton). Both sometimes conduct telephone polls and focus groups to elicit public opinion. Both consider themselves scientists who, just like those in lab coats, test hypotheses. Mathematica likes to focus on randomized, controlled studies, with half the participants receiving services and the control group receiving different, or no, services.
For either kind of survey science, the skill is in finding the right people to talk to. “When we want to make statements about a population, we make sure that we have a representative sample of that population,” says Decker. “For us it is a challenge in sampling technique that plays out differently in different settings.”
But where the market researchers spend most of their time soliciting opinions, Mathematica focuses on measuring behaviors and outcomes. “We focus on outcomes that might be affected by policy interventions and use our surveys to identify those outcomes,” says Decker.
Decker has twin interests, in policy issues and in numbers, and they go back to his high school years in Jacksonville, Illinois, when debates swirled around the dinner table. “Everyone in my family was very interested in ‘policy,’” says Decker. The family values: Try to make the world a better place, and work at putting evidence together that leads to the truth.
Decker’s mother taught high school classes in government and economics, and his father, a Shakespeare scholar, taught theater and English at MacMurray College. “My grandmother also taught government and economics, and in fact all of my grandparents were high school teachers. What was unique about my path was that I got a PhD despite not having an interest in being a college professor.”
After graduating from the College of William and Mary in 1983, he went to Johns Hopkins for his PhD in economics, emphasizing how to analyze research questions. “I wanted to get answers to specific policy research questions, as opposed to simply being interested in asking people’s opinions,” says Decker. In 1988 he joined Mathematica, which was 20 years old at that time, but still working in a young field.
Public policy research had come into vogue in the late 1960s and early ’70s, when the government was funding major evaluations and the universities were turning out legions of PhD economists who were frustrated by teaching jobs. In 1968 the just-founded Mathematica began the first social experiment in the United States to test ways of encouraging welfare recipients to work.
Over the years, Mathematica’s sophisticated sampling and testing methods earned its research a reputation for validity and rigor, especially in long-term evaluations.
Publicly traded, Mathematica had three divisions — a systems group, a consulting group, and a social research corporation. In 1983 Martin Marietta bought it out. “We were a bunch of leftover flower children from the ’60s,” said former CEO Metcalf (U.S. 1, December 10, 1997). “We were interested in doing work on policy issues, and Martin Marietta was a fairly benevolent owner. Then we read in the paper that we were a small piece of a $4 billion corporation that produced MX missiles and that hardly knew we existed. It was just a mismatch.”
Two of the divisions bought themselves out. The consulting group is now known as Mathtech, a 10-person company at Exit 8A. Simultaneously the researchers formed Mathematica Policy Research (and the holding company, Mathematica Inc.) as a for-profit enterprise. “If we wanted to have some control over our destiny, we had to own ourselves,” said Metcalf.
Decker wanted to work at a public policy surveying firm, rather than at a commercial or political surveying company such as the Gallup Poll. He could have gone to such companies as Westat in suburban D.C., the National Opinion Research Center (NORC) in Chicago, RTI International in the Research Triangle, and Abt Associates in Cambridge, Massachusetts.
“The main thing that sets Mathematica apart from its competitors is our ability to generate rigorous research at a high level of quality,” says Decker, explaining the financial niche: not the most expensive, but not the cheapest. Mathematica is particularly known for locating hard-to-find populations around the nation, and for its subject niches. It is still doing studies on welfare reform, but it has broadened its scope to include everything from education, disability, and employment training, to nutrition, early childhood, and health policy.
Each study must run its own obstacle course. “The toughest part of this business,” says Decker, “is identifying and implementing rigorous methods that give you the most confidence you can have in addressing a research issue.” Using “random assignment” to measure the effect of an intervention mimics the laboratory approach. It makes the methods as rigorous as possible and generates findings that stand up to as much scrutiny as possible. “As often as we can, we like to apply random assignment methods.”
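The random assignment Decker describes can be sketched in a few lines of Python. This is a minimal illustration with hypothetical participants and outcomes, not Mathematica’s actual procedure: shuffle the pool, split it into treatment and control groups, and estimate the intervention’s effect as the difference in average outcomes.

```python
import random

def random_assignment(participants, seed=0):
    """Randomly split a participant pool into treatment and control groups."""
    pool = list(participants)
    random.Random(seed).shuffle(pool)        # reproducible shuffle
    midpoint = len(pool) // 2
    return pool[:midpoint], pool[midpoint:]  # (treatment, control)

def average_outcome(outcomes, group):
    """Mean outcome for one group; the treatment-control difference
    in means estimates the effect of the intervention."""
    return sum(outcomes[p] for p in group) / len(group)

# Hypothetical data: six participants with a numeric outcome measure.
outcomes = {"a": 1.0, "b": 2.0, "c": 3.0, "d": 4.0, "e": 5.0, "f": 6.0}
treatment, control = random_assignment(outcomes)
effect = average_outcome(outcomes, treatment) - average_outcome(outcomes, control)
```

Real studies stratify the assignment and test the difference for statistical significance; the sketch shows only the core idea that chance, not choice, determines who receives the intervention.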
When less scientific methods are used, researchers must correct for selection bias. “If you give people the choice, a select group may receive the intervention. You have to find a statistical method to correct for the differences. You test out different methods to see how stable the answer is,” says Decker. “A lot more judgment comes into play.” And there are always potential obstacles that can stand in the way of robust research:
Getting Cooperation. Persuading school principals to relinquish their power was the challenge for Decker in the 2004 Teach for America study, a pioneering large scale education study using the random assignment method.
This $2 million, foundation-funded three-year, six-city research project involved 17 schools, 100 classrooms, and nearly 2,000 students. Children were randomly assigned to teachers in order to evaluate the Teach for America (TFA) program. Founded by a Princeton University alumna, TFA puts college graduates without traditional teacher training into classrooms in “disadvantaged” public schools. First year “Teach for America” classrooms were compared with classrooms taught by educators with a wide range of experience.
“First, we needed to come up with a system for randomly assigning students to teachers,” says Decker. “We had to figure out how to make it happen within the normal school process, but we also had to sell it to the schools.”
Principals’ initial reaction was to say, “We can’t do that.” They warned that parents — hoping to get their child into the “best” teacher’s class — would fight such an arrangement.
“But over time,” says Decker, “we showed them how you can achieve the same objectives that they have, and that we were creating comparable classrooms of students. In some cases school principals were very receptive because it gave them ammunition against parents’ protests.” Mathematica was ready to make changes if random selection would put a child in jeopardy.
The question studied was: “Do TFA teachers improve (or, at least, not harm) student achievement relative to what would have happened in their absence?” The study found the answer to be “Yes.”
“Yes” meant that students in TFA classrooms improved about 10 percent in math, or about one additional month of math instruction, compared to students in control group classrooms. The control group classrooms included traditionally certified, alternatively certified, and uncertified teachers.
The math improvement helped to silence the TFA critics. It also showed that teaching reading to disadvantaged children is more difficult than teaching math. In addition it helped educators to accept the value of rigorous, randomized research.
Nevertheless, in a “shoot the messenger” column, Jay Mathews of the Washington Post (July 13, 2004) scoffed at the study, not because it wasn’t accurate, but because he thought the gains recorded were simply not enough.
He pointed to frustrated TFA teachers who struck out on their own to establish charter schools, and who claimed their results made the modest improvements recorded by the Mathematica study look pathetic.
“Mathematica did what it was asked to do. It showed that new TFA teachers are no worse than other novice teachers, and maybe a little better. But that is not nearly good enough. I suspect that Mathematica, Teach For America, and everyone else involved in this research project agree that we are not doing what we should to make sure poverty-stricken kids have good teachers,” wrote Mathews.
Agreeing on the Results. Former Mathematica researcher David Myers had to publicly disagree with what his research partner was quoted as saying about a study on whether vouchers worked for school children in New York City. The other researcher, a Harvard professor, was quoted in the newspapers as saying they worked, whereas Myers thought that the report was less conclusive.
“I’m not saying he said that, but that’s how it was picked up in the media,” says Myers, in a phone interview. He left the firm last year to be senior vice president and director at American Institutes for Research in Washington, D.C. “I remember saying to the sponsors that these results raise more questions than answers, that I wouldn’t make policy out of that study.”
Asked whether the Harvard colleague might have felt pressure to produce some results, Myers says he considers that the “no effect” result is just as important as any other outcome. “It is an important piece of knowledge.”
Dealing with the Disappointed. What happens to those in the control group, who don’t get the special services? In one well-known case, participants who were randomly selected into the control group later sued the federal labor department because they received no services.
In 1993 Labor Secretary Robert Reich was taking heat because the Job Corps training program cost $26,000 per student and had only a 14 percent completion rate. Reich commissioned a random selection study in order to get certifiably solid results. Those who were put in the control group had to find jobs on their own but could reapply to the Job Corps after three years.
In 1998 lawyers for the control group filed a class action lawsuit. One lawyer even compared the “psychological, emotional, and economic” suffering allegedly inflicted by being left out of the Job Corps program to the notorious Tuskegee study, which withheld medical treatment from black men infected with syphilis. The plaintiffs’ lawyers did not prevail. Some plaintiffs received token payments for providing information to the court, but the court awarded neither a monetary settlement nor damages. The court did tell the labor department to locate the control subjects and invite those eligible to enroll in the Job Corps.
“With random selection it is a concern, to make sure that you do not harm folks,” says Decker. “After we design a project, we get it reviewed by an independent organization, an institutional review board, to be sure that the project does not have the potential for harm.” A proposed random selection nutritional study for women and children had to be canceled, for instance.
Decker points out that those in the Corps control group would not necessarily have won a spot in the Job Corps program. “Often when we apply random assignment, we do it because something is being added that is not an entitlement, and it has to be rationed in some fashion.”
When no defense will satisfy a critic, Decker points to his company’s open book policy: “One of the great things about most of our projects is that the data is available to the public, and that makes the analysis completely transparent. As in any other science, other researchers can use our data to come to their own conclusions, and publish their own findings.”
Decker has a track record of leadership at Mathematica. He joined the firm in 1988, was made associate director in 1997, was appointed executive vice president and COO in 2006, and last year was named president, moving up to CEO on January 1. He and his wife, Amy, a former social worker, have a three-year-old daughter.
Says former CEO Metcalf: “I had been with the company for 32 years, and its president for 21 years, and it really was time for a new breath of leadership. I found myself letting good things happen, but the last year or two I wasn’t as able to be as dynamic a leader as the company was ready for. I told everyone my final job was to make myself dispensable. Now, with a jumbling of new positions, everyone’s excited.”
Decker is younger than might be expected for the job. “It came down to, did we want someone for five years, and then Paul, or jump and go with Paul right away,” says Metcalf. “Paul is coming into it, not only with ability, but a lot of energy and vision.”
Decker spent six months as the executive vice president and a month as chief operating officer, with Metcalf taking a lesser role.
Mathematica has tripled in size in 10 years: revenue was $40 million in 1997 and is now approaching $120 million, with 600 employees. It expects 10 to 15 percent annual growth in its traditional markets, plus extra growth in four or five new areas. Decker is broadening the firm’s geographical reach and moving into new research areas. “We have always had staff interested in doing more work internationally, and I am trying to expand the opportunities for them. We will take our research — evidence-based policy making — into other countries,” he says.
For instance, the Millennium Challenge Corporation, created by the U.S. government to provide foreign assistance to developing countries, is supporting rigorous assessments of the impacts of these funds. Mathematica has been chosen to assess the results in Armenia, El Salvador, and Burkina Faso (the African country formerly known as Upper Volta).
Areas of current research include ways to facilitate consumer choice, care coordination, long-term care, health insurance coverage, children with special needs, and mental health services research.
Last summer Mathematica capped its previous collaborations with Cornell University by establishing a new Center for Studying Disability Policy and annexing the Cornell researchers. “Disability as a policy issue has really grown over the last 10 years,” says Decker. “Social Security has become a bigger investment on the part of the federal government. We have always had a strong group that worked on disability, and the policy group housed at Cornell respected our work.”
More than 30 people will focus on this research, and the center will have its own website and publication channels. It will be housed in the Washington office, which now has a total of 200 workers, including those in a subsidiary, the Center for Studying Health System Change.
Mathematica has a 30-person office in Cambridge, Massachusetts, and just opened a small office in Michigan. “The Ann Arbor office gives us a way to expand our geographical footprint. When we interview new PhDs we can offer that as a location,” says Decker.
With 100,900 square feet and 522 workers, Alexander Park is the headquarters for the three divisions: survey, research, and administrative. A 2,400-square-foot call center moved last year from 315 Enterprise Drive to 707 Alexander Road. The new space was formerly occupied by the U.S. Post Office’s remote encoding center.
Mathematica is known for finding creative ways to contact hard-to-find populations, but one of Decker’s challenges for this year will be the cell phone dilemma. At this point about 15 percent of adults do not have land line phones, says Decker. Because these phone-less people tend to be younger or lower-income adults, not reaching them could skew a study. Yet using cell phones is a security problem, and buying lists of cell phone numbers can be expensive.
“The client must decide whether it is worth the cost to get the extra 15 percent,” says Decker. “If you don’t do it, you decide what kind of statistical correction you make at the back end to capture the population you are missing.”
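The back-end statistical correction Decker mentions can work by reweighting: if cell-phone-only adults are missing from a landline sample, respondents who resemble them demographically are weighted up to their known population share. Here is a minimal post-stratification sketch in Python; the strata and numbers are hypothetical, not drawn from any Mathematica study.

```python
def poststratify(responses, population_share):
    """Weighted mean where each response counts in proportion to
    (population share of its stratum) / (sample share of its stratum).

    responses: list of (stratum, value) pairs from the reachable sample.
    population_share: known share of each stratum in the full population.
    """
    counts = {}
    for stratum, _ in responses:
        counts[stratum] = counts.get(stratum, 0) + 1
    n = len(responses)
    weighted_sum = 0.0
    for stratum, value in responses:
        weight = population_share[stratum] / (counts[stratum] / n)
        weighted_sum += weight * value
    return weighted_sum / n

# Hypothetical landline sample: young adults are 20 percent of the
# sample but 40 percent of the population, so their answers weigh more.
sample = [("young", 1.0)] * 2 + [("older", 0.0)] * 8
shares = {"young": 0.4, "older": 0.6}
corrected = poststratify(sample, shares)  # 0.4, versus an unweighted mean of 0.2
```

This is the simplest form of post-stratification; in practice the weights must also account for nonresponse within each stratum and for the extra variance the weighting introduces.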
Don’t expect to see Decker being quoted pundit-style. He is much too careful with his opinions to be successful on the TV talk shows. “We have to be careful in the same way that reporters do,” says Decker. “We can’t be an advocate for a position that we say we are conducting unbiased research on. That doesn’t mean we don’t have opinions. We are all coming from very intensive research backgrounds and are focused on building up the numbers to get the right answer. Since that becomes the focus of our day to day lives, our opinions take a back seat.”
“We spend our time looking at the numbers. Pretty soon you forget the opinion and are totally focused on addressing the research hypothesis.”
Mathematica Policy Research Inc., 600 Alexander Park, Suite 100, Princeton 08540; 609-799-3535; fax, 609-799-0005. Paul Decker, president and CEO. Home page: www.mathematica-mpr.com.