From Policy to Practice: Institutionalizing Evaluations at USAID
November 2, 2016
In a town that thrives on sound bites and photo ops, it’s rare to find an accomplishment that goes unheralded by its authors. Yet last month, the U.S. Agency for International Development (USAID) quietly released a major overhaul of its operational guidelines with little more than a tweet.
Known as the Automated Directives System (ADS), this set of policies and procedures serves as USAID’s standard playbook. Its 200-odd chapters contain the official definitions, roles, functions, and requirements that help development professionals carry out their duties in accordance with law and best practice.
Although USAID’s reestablished and revitalized policy bureau has produced an impressive array of strategies, policies, frameworks, and vision statements over the past five years, implementing them often means retrofitting existing programs to meet the new standards. By contrast, the ADS integrates these approaches into USAID’s management systems, ensuring they are “baked in” to the agency’s routine.
ADS Chapter 201 lays out the “program cycle,” which is USAID’s process for planning, delivering, assessing, and adapting programs in a country or region. The wholesale rewrite of this chapter not only weaves the concept of local ownership into every stage of the cycle, but ensures that monitoring and evaluation are planned from the very start of each program.
In addition to incorporating USAID’s 2011 Evaluation Policy, the ADS revision improves on it in several ways.
First, it emphasizes the importance of local participation in evaluations. The chapter sets out the principle that the conduct of evaluations “will be consistent with institutional aims of local ownership through respectful engagement with all partners, including local beneficiaries and stakeholders, while leveraging and building local evaluation capacity.” A subsequent section on evaluation planning states that “stakeholders, including beneficiaries, partner country partners, implementing partners, other USAID and U.S. Government entities, should be engaged to inform the development and prioritization of evaluation questions.” This represents a strengthening of the policy and a significant advance over current practice.
Second, it focuses on evaluation quality and utilization. Meta-analyses of the quality and coverage of USAID evaluations, and of their utilization for decision-making, revealed numerous shortcomings and areas for improvement. The new ADS chapter seeks to rectify these problems with specific instructions for better evaluation planning, expanded oversight and review, mandatory response to findings, and greater transparency and dissemination of supporting data.
Third, it establishes clear expectations for when evaluations must be conducted and the amount of resources that should be devoted to them. Whereas the 2011 evaluation policy required evaluations only of large projects, the ADS now requires an evaluation of some aspect of every project, as well as a whole-of-project performance evaluation under each country strategy. The revised ADS also mandates a written justification for not conducting an impact evaluation for any new, untested approach that is anticipated to be scaled up. Most importantly, the new chapter instructs operating units to devote an average of 3 percent of total program funding to external evaluation—a goal that was previously applied to USAID as a whole, but not set as an expectation for individual missions and offices.
There are some elements still missing from the evaluation guidance, such as an explicit commitment to share evaluation findings with local stakeholders (although they are included in “relevant partners, donors, and other development actors”). And there are no directives to facilitate the conduct or funding of ex-post evaluations, which are particularly important for ensuring sustainability and country ownership. Post-project evaluations can be difficult to finance, because funding has ended for the project being reviewed, and they can be politically risky to conduct, given the lack of institutional incentives to go back two, five, or ten years later to see whether gains have endured. Despite these hurdles, USAID’s Office of Food for Peace conducted an instructive series of ex-post evaluations across four countries, which found that a project’s results at the time of exit were not a good indicator of its long-term impact.
USAID should be commended for its painstaking work to institutionalize key policy advances by incorporating them into the ADS. Overhauling the playbook is a noteworthy achievement, but true victory will be measured by what happens in the field.
Diana Ohlbaum is a senior associate with the Project on Prosperity and Development at the Center for Strategic and International Studies in Washington, D.C.
Commentary is produced by the Center for Strategic and International Studies (CSIS), a private, tax-exempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. CSIS does not take specific policy positions. Accordingly, all views, positions, and conclusions expressed in this publication should be understood to be solely those of the author(s).
© 2016 by the Center for Strategic and International Studies. All rights reserved.
Photo Credit: ASHRAF SHAZLY/AFP/Getty Images