Introduction
By 2005, Experience Corps had been helping older people and schoolchildren for a decade. The public health program, operational in 13 cities, aimed to improve the wellbeing of older adults, especially in poor and minority communities. But instead of encouraging them to exercise and eat better, it promoted health through active community engagement. Specifically, Experience Corps placed retirees in elementary schools, where they worked with children to improve literacy and other academic skills. Although the program seemed intuitive, its design—from the number of hours volunteers worked, to the number of volunteers placed in each school, to the kind of tasks they performed—was carefully defined and based on years of scientific research.
Preliminary evaluations of Experience Corps suggested it held considerable promise. Interviews, surveys and classroom visits documented that volunteers felt better both physically and mentally, and reported a renewed sense of purpose. They weren’t the only ones to benefit. In classrooms with Experience Corps volunteers, children’s literacy scores improved and behavioral problems decreased. Chronic absenteeism fell. Both teachers and principals credited the program with transforming the school environment.
Such observational and anecdotal evidence was encouraging. But gerontologist Linda Fried, co-designer and co-founder of Experience Corps, wanted a science-based evaluation of this scientifically designed program. A rigorous study would be necessary to integrate the Experience Corps model into large-scale, federally funded public health initiatives.
In 2005, Fried, who headed the Center on Aging and Health (COAH) at Johns Hopkins University, planned to approach the National Institutes of Health (NIH) for a grant to fund a five-year, randomized controlled trial (RCT) of the Baltimore chapter of Experience Corps. She and her team of researchers envisioned two sets of control groups: one for the older volunteers and one for the schools. Schools would be randomly assigned to host volunteers, or not. Volunteers would likewise be randomly assigned to participate in the high-intensity school-based program, or not. This would allow researchers to isolate Experience Corps as a variable, and determine its impact on both the volunteers and the schools.
But as the researchers assembled the grant application in 2005, they faced increasing dissent over methodology. Greater Homewood Community Corporation (GHCC), the community-based organization that operated Experience Corps in Baltimore and which had so far cooperated with the Johns Hopkins team, was adamantly against randomizing schools. GHCC was concerned that no school would want to be in the “control” group and that the study would harm relationships it had built over decades with individual schools and principals. What’s more, Baltimore’s mayor and City Council argued that schools with the greatest need should have priority in receiving volunteers. As the process advanced, it also became clear that locating at least one Experience Corps school in each councilmember’s district would ensure greater political support for the program and the study. These demands would preclude strict randomization.

Fried, as the grant’s principal investigator, had to weigh the options. The NIH required rigorous scientific method—would it accept COAH’s proposal if the schools were not randomized? Even if the NIH accepted a different methodology, what would that methodology look like? Would the results be valid and convincing? If the researchers insisted on strict randomization, they risked losing political support, which could endanger the long-term survival of Experience Corps in Baltimore. Or GHCC could simply refuse to carry out the study altogether and continue delivering Experience Corps without Johns Hopkins. In a series of internal meetings and in consultation with executives at GHCC, Fried and her team considered how best to proceed.