Impact Evaluations

Mathematica specializes in designing and conducting impact evaluations using experimental, quasi-experimental, and nonexperimental designs. Over the past 45 years, we have customized evaluation designs to address the needs of diverse programs and policies. Our sampling statisticians are experts in developing and implementing sample designs. Finding an appropriate sample frame can be challenging in developing countries, so our staff work closely with local data collectors and other stakeholders to identify strategies for constructing a frame that can answer the key study questions.

We are adept at customizing random assignment procedures to adapt to the needs of local stakeholders and enhance buy-in. For example, in an evaluation of farmer training in Armenia, we randomly assigned approximately 300 villages to a training intervention group or a control group using a phased-in approach. The random selection was conducted electronically in a public setting, in front of village mayors and key stakeholders, to ensure transparency and secure buy-in from all constituencies. This very public process helped local implementers adhere to the selection throughout the life of the evaluation. Similarly, for the School Dropout Prevention Pilot Program, after presenting the project to the National Ministries of Education, we gathered school directors from the regions targeted by the project in each of the four countries where the program was being implemented to share information with them about the evaluation and to secure their commitment to participate. Schools were then assigned to intervention and control groups before the directors left the meeting. In some of the African settings in which we have worked, such as Rwanda and Niger, we have conducted randomization by pulling village names from a bag during public ceremonies, with village mayors or ministry of education representatives in attendance, an approach with which government stakeholders were most comfortable.
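A phased-in random assignment like the Armenia example can be sketched as follows. This is a minimal illustration, not the actual study procedure: the village names, phase count, and seed are hypothetical, and the fixed seed simply stands in for the auditability of a public electronic draw.

```python
import random

def phased_random_assignment(villages, phases=2, seed=None):
    """Randomly order villages, then split them into phased groups.

    Early-phase villages receive the intervention first; later-phase
    villages serve as the control group until their own phase begins.
    """
    rng = random.Random(seed)  # a fixed seed makes the draw reproducible/auditable
    shuffled = villages[:]
    rng.shuffle(shuffled)
    size = len(shuffled) // phases
    # The last phase absorbs any remainder when the split is uneven.
    return [shuffled[i * size:(i + 1) * size] if i < phases - 1
            else shuffled[(phases - 1) * size:]
            for i in range(phases)]

# Example: six hypothetical villages split into two phases
groups = phased_random_assignment(["V1", "V2", "V3", "V4", "V5", "V6"],
                                  phases=2, seed=42)
```

In practice, the draw would be run live in front of stakeholders, with the seed or procedure announced in advance so the assignment can be verified afterward.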

When random assignment is not possible, we help our clients identify the most rigorous study design feasible, given financial and programmatic considerations. For example, we have used regression discontinuity methods in several evaluations, including a conditional cash transfer evaluation in Jamaica. When neither random assignment nor a regression discontinuity design is feasible—such as in an evaluation of the Habitat program in Mexico, a program to improve living standards in poor urban neighborhoods—we have used propensity score matching or other similar methods to identify a relevant comparison group.
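The comparison-group step of propensity score matching can be sketched with a simple nearest-neighbor rule. This is an illustrative fragment only: the unit labels and scores are hypothetical, not drawn from any Mathematica study, and real applications estimate scores from baseline covariates and check balance after matching.

```python
def nearest_neighbor_match(treated, comparison):
    """Match each treated unit to the comparison unit whose estimated
    propensity score is closest (matching with replacement).

    Both inputs are dicts mapping unit id -> propensity score.
    """
    matches = {}
    for t_id, t_score in treated.items():
        matches[t_id] = min(comparison,
                            key=lambda c_id: abs(comparison[c_id] - t_score))
    return matches

# Hypothetical scores for treated neighborhoods and candidate comparisons
treated = {"T1": 0.62, "T2": 0.35}
comparison = {"C1": 0.60, "C2": 0.30, "C3": 0.80}
pairs = nearest_neighbor_match(treated, comparison)
# T1 pairs with C1 (score gap 0.02); T2 pairs with C2 (score gap 0.05)
```

Matching on a single score in this way is what makes the method practical: it collapses many baseline characteristics into one number on which similar units can be compared.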