Publication: Connecting Evaluation and Budgeting
Date
2014-03
Author(s)
Robinson, Marc
Abstract
This paper discusses how evaluation is an essential tool for good budgeting and a core element of any well-designed government-wide performance budgeting system. It is organized into five sections. (1) Evaluation and Performance Budgeting: The Principle outlines the role that evaluation should, in principle, play in supporting good budgeting. It identifies the key ways in which performance information in general supports budgeting and then outlines how performance budgeting seeks to structure the contribution of performance information to budgeting. The section concludes by discussing the nature and role of evaluation as a key component of the performance information base for budgeting. (2) Evaluation and Performance Budgeting in Practice reviews the actual relationship between evaluation and performance budgeting by looking at the experiences of countries that have made substantial efforts to implement both. (3) Recent Efforts to Connect Evaluation and Budgeting notes that, since the global financial crisis, renewed interest in the role of evaluation as a budgetary tool has become increasingly apparent among OECD countries, with countries such as the United States and the Netherlands making moves in this direction. (4) How to Better Connect Evaluation and Budgeting argues that doing so requires two things: first, evaluation needs to be made more useful for budgeting; second, the budget process needs to focus more on expenditure prioritization and performance. (5) Conclusions.
Citation
Robinson, Marc. 2014. Connecting Evaluation and Budgeting. ECD Working Paper Series; No. 30. © World Bank, Washington, DC. http://hdl.handle.net/10986/18997 License: CC BY 3.0 IGO.
Related items
Publication: Handbook on Impact Evaluation: Quantitative Methods and Practices (World Bank, 2010)
This book reviews quantitative methods and models of impact evaluation. The formal literature on impact evaluation methods and practices is large, with a few useful overviews. Yet there is a need to put the theory into practice in a hands-on fashion for practitioners. The book also details challenges and goals in other realms of evaluation, including monitoring and evaluation (M&E), operational evaluation, and mixed-methods approaches combining quantitative and qualitative analyses. It is organized as follows. Chapter two reviews the basic issues pertaining to an evaluation of an intervention to reach certain targets and goals. It distinguishes impact evaluation from related concepts such as M&E, operational evaluation, qualitative versus quantitative evaluation, and ex ante versus ex post impact evaluation. Chapter three focuses on the experimental design of an impact evaluation, discussing its strengths and shortcomings. Various non-experimental methods exist as well, each of which is discussed in turn in chapters four through seven. Chapter four examines matching methods, including the propensity score matching technique. Chapter five deals with double-difference methods in the context of panel data, which relax some of the assumptions on the potential sources of selection bias. Chapter six reviews the instrumental variable method, which further relaxes assumptions on self-selection. Chapter seven examines regression discontinuity and pipeline methods, which exploit the design of the program itself as potential sources of identification of program impacts. Chapter eight presents a discussion of how distributional impacts of programs can be measured, including new techniques related to quantile regression. Chapter nine discusses structural approaches to program evaluation, including economic models that can lay the groundwork for estimating direct and indirect effects of a program. Finally, chapter ten discusses the strengths and weaknesses of experimental and non-experimental methods and also highlights the usefulness of impact evaluation tools in policy making.

Publication: Evaluating the Impact of Development Projects on Poverty: A Handbook for Practitioners (Washington, DC: World Bank, 2000-05)
Very little is known about the actual impact of projects on the poor. Many are reluctant to carry out impact evaluations because they are deemed expensive, time consuming, and technically complex, and because the findings can be politically sensitive. Yet a rigorous evaluation can be powerful in assessing the appropriateness and effectiveness of programs. Evaluating impact is particularly critical in developing countries, where resources are scarce and every dollar spent should aim to maximize its impact on poverty reduction. This handbook seeks to provide project managers and policy analysts with the tools needed for evaluating project impact. It is aimed at readers with a general knowledge of statistics. Chapter 1 presents an overview of concepts and methods, Chapter 2 discusses key steps and related issues to consider in implementation, Chapter 3 illustrates various analytical techniques through a case study, and Chapter 4 includes a discussion of lessons that have been reviewed for this handbook. The case studies, included in Annex I, were selected from a range of evaluations carried out by the Bank, other donor agencies, research institutions, and private consulting firms. Also included in the annexes are samples of the main components that would be necessary in planning any impact evaluation: sample terms of reference, a budget, impact indicators, a log frame, and a matrix of analysis.

Publication: Planning, Monitoring, and Evaluation: Methods and Tools for Poverty and Inequality Reduction Programs (World Bank, Washington, DC, 2013-01)
As we enter the second decade of the twenty-first century, governments, international organizations, nongovernmental organizations (NGOs), philanthropic organizations, and civil society groups worldwide are actively focusing on evidence-based policy and increased accountability to stakeholders (the results agenda). The widespread implementation of the results agenda has generated a plethora of books, guides, academic papers, trainings, and case studies, which has enabled an ongoing maturation process in the field. Consequently, specialists are now better equipped to understand what works under which circumstances. Broadly speaking, there are two interrelated questions that must be answered when assessing the sustainability of a government results agenda. First, are the institutional design and practice of government conducive to evidence-based policy making? Second, are the overarching monitoring and evaluation (M&E) methods and specific tools used appropriate for garnering the evidence demanded by government? This series of notes aims to make a small contribution to the latter question by summarizing and highlighting a selection of PM&E methods and the tools that governments and international organizations around the world have developed to put them into practice in their own contexts. The central goal of this initiative is to prompt a process of learning, reflection, and action by providing practical information to those whose leadership role requires them to understand PM&E methods and their potential for enhancing evidence-based policy making.

Publication: Impact Evaluation in Practice, First Edition (World Bank, 2011)
The Impact Evaluation in Practice handbook is a comprehensive and accessible introduction to impact evaluation for policymakers and development practitioners. The book incorporates real-world examples to present practical guidelines for designing and implementing evaluations. Readers will gain an understanding of the uses of impact evaluation and the best ways to use evaluations to design policies and programs that are based on evidence of what works most effectively. The handbook is divided into three sections: Part One discusses what to evaluate and why; Part Two outlines the theoretical underpinnings of impact evaluation; and Part Three examines how to implement an evaluation. Case studies illustrate different methods for carrying out impact evaluations.

Publication: Reconstructing Baseline Data for Impact Evaluation and Results Measurement (World Bank, Washington, DC, 2010-11)
Many international development agencies and some national governments base future budget planning and policy decisions on a systematic assessment of the projects and programs in which they have already invested. Results are assessed through Mid-Term Reviews (MTRs), Implementation Completion Reports (ICRs), or through more rigorous impact evaluations (IEs), all of which require the collection of baseline data before the project or program begins. The baseline is compared with the MTR, ICR, or post-test IE measurement to estimate changes in the indicators used to measure performance, outcomes, or impacts. However, it is often the case that a baseline study is not conducted, seriously limiting the possibility of producing a rigorous assessment of project outcomes and impacts. This note discusses the reasons why baseline studies are often not conducted, even when they are included in the project design and funds have been approved, and describes strategies that can be used to 'reconstruct' baseline data at a later stage in the project or program cycle.