Development agencies are increasingly making decisions and evaluating success on the basis of an ever-growing supply of data. Some argue that the proliferation of data improves development outcomes for the states and people targeted by agencies' interventions, as well as the accountability of those agencies. Others argue that selection bias, a lack of longitudinal records, and the misuse of data can lead agencies to ignore, or even exacerbate, the problems they seek to mitigate. In this paper, I investigate the measurement, evaluation, and data practices of the U.S. Millennium Challenge Corporation (MCC) to argue that too short a measurement horizon can mask the true outcome of a development intervention. Drawing on research data from an MCC-sponsored land reform in Lesotho, I show that the agency's short assessment timeframe obscured the reality of the reform. When the MCC's five-year project in Lesotho, which explicitly targeted women's land access, ended in 2013, the land reform appeared to have been a success. Only a year later, however, the reality in one village looked very different. Rather than having their land access secured or enhanced by the law, women in the village were being dispossessed by real estate developers with the assistance of government bureaucrats. Because the MCC's short-term data measured outcomes rather than the structures, mechanisms, and vulnerabilities that determine those outcomes, it concealed a significant problem with the project. This case illustrates both a problem with data-driven development projects and a possible way to improve them.