Innovation Policy Learning from Korea: The Case of Monitoring and Evaluation (M&E)

© 2024 International Bank for Reconstruction and Development / The World Bank
1818 H Street NW, Washington DC 20433
Telephone: 202-473-1000; Internet: www.worldbank.org

This work is a product of the staff of The World Bank with external contributions. The findings, interpretations, and conclusions expressed in this work do not necessarily reflect the views of The World Bank, its Board of Executive Directors, or the governments they represent. The World Bank does not guarantee the accuracy, completeness, or currency of the data included in this work and does not assume responsibility for any errors, omissions, or discrepancies in the information, or liability with respect to the use of or failure to use the information, methods, processes, or conclusions set forth. The boundaries, colors, denominations, and other information shown on any map in this work do not imply any judgment on the part of The World Bank concerning the legal status of any territory or the endorsement or acceptance of such boundaries. Nothing herein shall constitute or be construed or considered to be a limitation upon or waiver of the privileges and immunities of The World Bank, all of which are specifically reserved.

Rights and Permissions

The material in this work is subject to copyright. Because The World Bank encourages dissemination of its knowledge, this work may be reproduced, in whole or in part, for non-commercial purposes as long as full attribution to this work is given. Any queries on rights and licenses, including subsidiary rights, should be addressed to World Bank Publications, The World Bank Group, 1818 H Street NW, Washington, DC 20433, USA; fax: 202-522-2625; e-mail: pubrights@worldbank.org.
World Bank Finance, Competitiveness and Innovation Global Practice

Innovation Policy Learning from Korea: The Case of Monitoring and Evaluation (M&E)

MAY 2024

Acknowledgments

This document was prepared by a team led by Yanchao Li (Private Sector Specialist) and Jaime Frias (Senior Economist / Task Team Leader), which comprised Kyeyoung Shin (Consultant) and Juan Rogers (Consultant) from the Finance, Competitiveness, and Innovation Global Practice of the World Bank (WB). The authors are grateful for useful comments from WB peers Xavier Cirera (Senior Economist), Anwar Aridi (Senior Private Sector Specialist), Joo Sueb Lee (Senior Economist), Justin Hill (Senior Private Sector Specialist), Yehia Eldozdar (Monitoring & Evaluation Specialist), and Jiyoung Choi (Senior Economist). The team appreciates the guidance of Cecile Niang (Practice Manager, FCI), Zafer Mustafaoglu (Practice Manager, FCI), and Denis Medvedev (Director, IFC). The team is grateful for the insights offered by leading experts in the subject area, including Dr. Jae-Jin Kim (President of the Korea Institute of Public Finance, KIPF), Dr. Chi-Ung Song (Senior Fellow of the Science and Technology Policy Institute, STEPI), Professor Dr. Jakob Edler (Executive Director, Fraunhofer Institute for Systems and Innovation Research ISI / Professor of Innovation Policy, Manchester Institute of Innovation Research), and Michael Keenan (Senior Policy Analyst, Organisation for Economic Co-operation and Development, OECD). The team benefited from feedback to the case through policy discussions with the Department of Trade and Industry of the Philippines, the National Economic and Development Authority of the Philippines, the Department of Science and Technology of the Philippines, the Ministry of National Development Planning of the Republic of Indonesia, and the Ministry of Science and Technology of Viet Nam.
The team thanks Arnelyn Abdon (Consultant), Daein Kang (Consultant), Grace Morella (Consultant), Kristiana Torres (Assistant), Adela Antic (Consultant), and Kibum Kim (Private Sector Specialist) for facilitating the policy discussions with experts and government representatives from the Philippines, Indonesia, and Viet Nam. The authors are also grateful to Marcy Gessel (Editor) and Zoe Escobar (Assistant) for editorial support. This case study was supported by the national government of the Republic of Korea through the Korea-World Bank Partnership Facility (KWPF).

Contents

Acknowledgments
Abbreviations and Acronyms
Executive Summary
01. Introduction
02. International Good Practice in Monitoring and Evaluation of Innovation Policy
03. M&E of Innovation Policy in Korea
04. Lessons and Takeaways for Developing Countries
References
Appendices
Abbreviations and Acronyms

ANII  Agencia Nacional de Investigación e Innovación (National Innovation Agency of Uruguay)
ICT  Information and communication technology
ISA  Innovation and Science Australia
KDI  Korea Development Institute
KIPF  Korea Institute of Public Finance
KISTEP  Korea Institute of Science and Technology Evaluation and Planning
KISTI  Korea Institute of Science and Technology Information
M&E  Monitoring and Evaluation
MOEF  Ministry of Economy and Finance of the Republic of Korea
MOSF  Ministry of Strategy and Finance of the Republic of Korea
MSIT  Ministry of Science and ICT of the Republic of Korea
NABO  Korean National Assembly Budget Office
NTIS  National Science and Technology Information Service
PACST  Presidential Advisory Council on Science and Technology of the Republic of Korea
PART  Program Assessment Rating Tool
PER  Policy Effectiveness Review
R&D  Research and Development
RCT  Randomized Controlled Trial
RDTI  Research and Development (R&D) Tax Incentive
STI  Science, Technology, and Innovation
TAFTIE  The Association for Technology Implementation in Europe (European Network of Innovation Agencies)

Executive Summary

This case study belongs to the series produced under the World Bank project “Innovation Policy Learning from Korea.” The project seeks to promote better innovation policy design and execution in East Asian client countries through knowledge transfer and capacity building by drawing on relevant innovation policy experience from the Republic of Korea.
As a relatively newly industrialized economy, Korea is uniquely positioned to offer valuable and timely lessons for developing countries that desire to strengthen innovation policy and ascend the “capability escalator.”1 In the 1960s, Korea’s gross domestic product (GDP) per capita was comparable to those of the least developed countries in Africa and Asia; by 2021, Korea was ranked first worldwide on the Bloomberg Innovation Index. Throughout Korea’s remarkable industrialization, key emphases have been placed on technology and innovation. From the initial “imitative innovation” to the present, when Korea performs as a front-runner in many high-tech areas, the country has effectively and proactively promoted business growth and industrial upgrading with the leverage offered by various policy interventions. The country deployed those interventions to support research and development (R&D) and non-R&D innovation. As the country went through various stages of development, a clear policy trajectory toward greater sophistication emerged, founded on learning and adaptation (Frias and Lee, 2021). The use and gradual improvement of an effective monitoring and evaluation (M&E) mechanism for innovation policy supported that transition. M&E is considered a necessary function of modern management systems, given that it addresses the requirements of accountability on one hand and the need to determine the results of implemented policies and learning outcomes on the other. In the field of innovation policy, M&E is particularly critical to enable policy learning and adaptation given the uncertain nature of innovation processes and outcomes. Innovation policy is seldom homogeneous; public agencies deploy a wide range of policy instruments to support the various stages and processes of innovation. M&E of innovation policy interventions tends to be complex owing to a range of issues on both the governance and technical levels.
1 Cirera and Maloney (2017) noted that the set of policy instruments that can support the building of innovation capabilities at each stage differs. The recommended process is to gradually deploy instruments of increasing complexity to reduce the demands on government capabilities to a manageable level.

Some challenges are associated with the M&E of innovation policy per se, and some challenges are specific to conducting M&E in developing countries. For the former, key challenges include the diversity and complexity of innovation policy interventions, uncertainty due to long terms of maturity for results, difficulty in disentangling results from knowledge spillovers, and limited observability of outcomes. In developing countries, typical M&E challenges include lack of leadership commitment to learn, which hinders the adoption of M&E systems; low priority of business innovation policy relative to other policy domains; difficulty in coordinating between ministries and agencies to streamline M&E of innovation policy; lack of organizational capacity and expertise to design and deploy M&E and to use M&E for the purpose of learning; and limited resources for M&E infrastructure and capacity building. Korea has managed relatively well—and increasingly well—to address the previously mentioned challenges throughout its process of catching up by using an innovation-driven growth model. The Korea experience presents an innovation policy M&E mechanism that has kept learning, evolving, and adapting to increasingly sophisticated policy practices and that, over the decades, has responded dynamically to the needs of business innovation in the country.
The analysis presented in this case study demonstrates that Korea is a practical example for developing countries not only because of how successfully it has managed to achieve certain M&E objectives but also because of its “imperfection” and how it operated in a reality with limitations and constraints. Reviews of Korea’s experience and international practices revealed that those challenges—and the principles and approaches to addressing those challenges—fall into three main categories: (1) governance, (2) data and methods, and (3) capacity and resources. Korea’s experience sheds light on all three categories of challenges; in particular, it offers useful takeaways for developing countries (including Indonesia, the Philippines, and Viet Nam) on how to address challenges, especially those in governance and in capacity and resources. Key takeaways from the Korea experience include the following:

Governance:

• Korea’s well-articulated use of mandated M&E frameworks can be instructive for client countries. The legal basis requires the use of M&E frameworks, defines roles and rules explicitly, and delegates authority to promote not only accountability but also autonomy.

• Korea’s five-year master plans have been instrumental in ensuring a holistic and long-term approach to M&E of innovation policy. Korean five-year master plans are developed to provide overall guidance on M&E of innovation policy, and detailed M&E plans are conceived at the ministerial level based on those master plans, which ensures that a long-term perspective is established during policy formation.

• Korea has found a balance in dividing the labor between R&D and non-R&D innovation policies between two major ministries—the Ministry of Science and ICT (MSIT) and the Ministry of Economy and Finance (MOEF)—an approach which is well aligned with its overall structure of innovation policy making and implementation.
• Korea has benefited greatly from a strong political drive and major investments in promoting M&E and innovation policy from the very top-level leadership. The role of the Presidential Advisory Council on Science and Technology (PACST) was crucial in mandating the M&E of innovation policy in practice, and regular high-level meetings presided over by the president provided real coordinating power.

Data and methods:

• The Korea National Science & Technology Information Service (NTIS) is an example of a vehicle to ease access to information and data for M&E, particularly those for innovation policy, which often are managed by several entities. NTIS serves as a one-stop shop for M&E information and data that is open not only to government officials but also to the public, including researchers, academics, and students.

• In Korea, the justification of programs—including building their case and processing and analyzing M&E data—is closely supported by specialized research institutions with expertise in M&E.

• For some evaluations, Korean laws provide autonomy and flexibility by deliberately not imposing specific evaluation methods. In such cases, only the areas of evaluation are specified, leaving the method of evaluation to the discretion of the entity charged with undertaking it.

Capacity and resources:

• Korea offers useful experience in fusing the requirements of staff training to the M&E of innovation policy itself. The supporting agencies offer mandatory and optional training programs specifically designed to strengthen government officials’ capabilities for M&E, in addition to providing operational guidelines.

• Korea’s approach to enhancing coordination capacity among policy makers is firmly based on the legal framework.
Coordination not only helps in achieving efficiencies by avoiding duplication of effort but also benefits the effort to develop useful, relevant, and accessible data, because common definitions and architectures are achievable only in the context of such coordination.

• Korea has leveraged another important factor in ensuring coordination and implementation: budgeting. MSIT and MOEF, the executive bodies of the M&E of R&D and non-R&D, have the power to allocate the budget based on the results of M&E. Important to note is that budget allocation is not solely based on performance evaluation results, because not all programs can be monitored and evaluated on an equal footing.

• Korea’s expertise in carrying out M&E of innovation policy was supported by research institutes specializing in M&E and by academia, making it easier to ensure that the leading-edge experts in the country are involved in the process.

This case study demonstrates clearly that good practice of M&E is never merely a technical matter. It is also a governance matter, because the requirement of and support for M&E follow directly from the governance basis on which it will be built. Korea’s M&E system is the result of decades of adjustment and refinement. M&E schemes for innovation policy require constant adjustments and improvements due to the fast-changing, multifaceted nature of innovation activities. Developing an M&E system such as Korea’s, or adopting certain features of it, may necessitate a change in culture, organizational routines, and individual behavior. Undertaking the steps toward more adequate M&E to suit country-specific needs in innovation policy requires prioritization driven by the readiness of a country’s governance situation, capacity and resource levels, and technical capabilities. To an extent, developing countries could anticipate and be prepared for certain challenges in improving their M&E schemes by referring to the Korean experience.
The challenges that Korea has experienced could be circumvented if developing countries implement countermeasures early on.

01. Introduction

Motivation

This case study is one of the deliverables produced under the World Bank project “Innovation Policy Learning from Korea.” The project aims to promote better innovation policy design and execution in East Asian client countries through knowledge transfer and capacity building by drawing on relevant innovation policy experience from the Republic of Korea. As a relatively newly industrialized economy, Korea is uniquely positioned to offer valuable and timely lessons for developing countries that desire to strengthen innovation policy and ascend the “capability escalator.”2 In the 1960s, Korea’s gross domestic product (GDP) per capita was comparable to those of the least developed countries in Africa and Asia; by 2021, Korea was ranked first worldwide on the Bloomberg Innovation Index. During those 60 years, Korea has undergone extraordinary economic growth and global integration, becoming one of the strongest recently industrialized economies in the world. Throughout Korea’s remarkable industrialization, key emphases have been placed on technology and innovation. From the initial “imitative innovation” to the present, in which Korea performs as a front-runner in many high-tech areas, the country has effectively and proactively promoted business growth and industrial upgrading with the leverage offered by various policy interventions. The country deployed those interventions to support research and development (R&D) and non-R&D innovation. As the country went through various stages of development, a clear policy trajectory toward greater sophistication emerged, founded on learning and adaptation (Frias and Lee, 2021).
The use and gradual improvement of an effective monitoring and evaluation (M&E) mechanism for innovation policy supports that foundation and the economic structure it fosters. Moreover, Korea has been active in spreading its development know-how to support aspiring developing countries toward modernization, industrialization, and innovation-driven growth. That openness is favorable to developing countries ready to draw on Korea’s development experience in various aspects relevant to an industrializing country. Among those countries are the three East Asia and Pacific countries identified in this project: Indonesia, the Philippines, and Viet Nam.3 Those countries, although faced with characteristically individual development challenges, are relatively ready to absorb the innovation experience offered by Korea. Those countries recently graduated from the ranks of middle-income countries to join the upper-middle-income countries. Nevertheless, such equitable growth is challenging to sustain as those countries move toward higher income levels. They need transformation and upgrading through innovation to unlock further growth potential and overcome the “middle-income trap.” In that context, Korea’s experience of transforming its economy toward innovation-driven growth could offer practical lessons for those three countries and beyond.

2 Cirera and Maloney (2017) noted that the set of policy instruments that can support the building of innovation capabilities at each stage differs. The recommended process is to gradually deploy instruments of increasing complexity to reduce the demands on government capabilities to a manageable level.

Why the Focus on M&E?
When the authors considered the various Korean experiences in pursuit of case studies of innovation policy to distill lessons for developing countries, they found that the topic of M&E—and of innovation policy capabilities in general—stood out as one of the most frequently cited necessities by experts in the field. M&E is considered a necessary function of modern management systems, given that it addresses the requirements of accountability on one hand and the need to determine the results of implemented policies on the other (OECD, 2017). The public sector is increasingly pressured to deliver better results with limited resources while adhering to overarching principles of transparency, accountability, and efficiency. That function requires a high level of fluency in evidence-based policy making founded on institutional learning. From the national, regional, and local levels of government to specialized public organizations in education, health care, and beyond—and further, past national borders to international agencies such as the United Nations, Organisation for Economic Co-operation and Development (OECD), World Bank, and European Commission—policy makers and program owners are keen to understand whether and how their interventions work. The goal is to assess the efficiency and effectiveness of interventions—not neglecting unintended outcomes—and, ultimately, to understand how to strengthen interventions by studying those already implemented. M&E not only promotes transparency, accountability, and progress tracking but also facilitates policy learning, generates actionable information for timely policy adjustments and improvements, and enhances policy effectiveness.

3 The need for reform and demand for support were considered in selecting target client countries.
Those criteria were primarily identified by referring to the findings from the World Bank’s previous advisory engagements with developing countries; the policy effectiveness reviews (PERs) for innovation policy conducted in each of the client countries were the most valuable resource. Members of the project team also visited Viet Nam and the Philippines and held interviews and consultations with the countries’ policy makers to gather demand assessments from August through September of 2019. Consideration of synergies with completed, ongoing, and scheduled World Bank projects, both lending and nonlending, also guided the client country selection process, with the intent to maximize impact.

In the field of innovation policy, M&E is particularly critical to the enablement of policy learning and adaptation given the uncertain nature of innovation processes and outcomes (Edler et al. 2016). Innovation policy is seldom homogeneous; public agencies deploy a wide range of policy instruments to support the various stages and processes of innovation (see, for example, Cirera et al. 2020 as a detailed guidebook to the design of innovation policies). Innovation policy is a term often used to refer to the overall field of science, technology, and innovation (STI) policy, which includes not only the generation and diffusion of new solutions but also the generation and diffusion of scientific knowledge from basic and applied research and the outcomes thereof. This case study is concerned with business innovation.4 Ideally, policy interventions, especially those supporting innovation (given the high level of uncertainty), should be based on the use of policy pilots requiring rigorous and built-in M&E to determine whether the results warrant scale-up.
The technological and policy trajectories are difficult to define at the initial stage of policy design and formulation; therefore, a crucial step is to set up an effective M&E mechanism that monitors a given policy’s progress—and evaluates it in a timely fashion—to enable policy learning and course correction. M&E therefore permits the adjustment of deployed policy to improve its efficacy. Developing countries with aspirations to catch up through innovation-driven development will find M&E indispensable among the core capabilities with which the governments of advanced economies equip themselves. By establishing M&E mechanisms and mandating their use, the government of a developing country can begin to build accountability and efficiency, gather and manage growing data and evidence, gain knowledge about the innovation policy progress made, obtain buy-in from stakeholders to justify interventions, and, ultimately, realize policy learning and adaptation so that future decisions to intervene (or not) may be traced to evidence rather than to wishful thinking. The M&E mechanism for innovation policy in Korea involves a high level of cross-agency coordination, the success of which underpins Korea’s increasingly strong national innovation system and its performance. When assessing the learning needs in developing countries, the authors found high interest among innovation policy practitioners in diving deeper to learn from the Korean experience of innovation policy M&E. M&E is one of the more poorly implemented areas in developing countries, as suggested by findings from policy effectiveness reviews (PERs); hence, diffusion of good practices that have proven useful in Korea is necessary to draw wider implications for learning and transfer.

4 “Innovation relates to the ability to introduce a new product, a new idea, a new technology, or a new solution. As such, innovation includes basic upgrading, but also the invention of new products and technologies” (Cirera et al.
2020).

What is M&E, and How Does It Relate to Innovation Policy Learning?

M&E provides the evaluation mechanism that collates information and findings with respect to how well an intervention is performing and whether it is making reasonable progress toward its objectives at specified milestones. Although often used in combination, monitoring and evaluation are intimately related, though nevertheless distinct:

• Monitoring is observational by nature. Monitoring is addressed at the operational or management level of policy interventions or programs to understand whether a program is being correctly implemented. It is typically done internally, to regularly collect data about project inputs, activities, and outputs, the participation of intended beneficiaries, and the beneficiaries’ satisfaction with the program. Monitoring also covers reporting and documentation, finances, and budgets.

• Evaluation is judgmental by nature. Evaluation is addressed at a more strategic level to identify the effect, and its magnitude, caused by a given innovation policy’s implementation.5 Ideally, evaluation should offer counterfactual scenarios to address the question of “what if the intervention did not happen?” In practice, however, experimental or controlled-setting evaluations are not always doable; hence, counterfactual scenarios are difficult to establish. Although evaluation could be done internally, it is more often carried out by external, independent bodies to preserve objectivity. Best practices often involve a combination of internal and external roles in evaluation. Evaluators analyze the data collected from monitoring and from other sources and determine whether the intervention has achieved its stated objectives.
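The counterfactual logic behind evaluation can be made concrete with a small numerical sketch. The program, firms, and figures below are entirely hypothetical and are not drawn from any Korean scheme; the sketch assumes a randomized controlled trial, in which the control group stands in for the counterfactual scenario:

```python
from math import sqrt
from statistics import mean, stdev

# Purely illustrative (made-up) outcomes: post-program growth in R&D
# spending (%) for firms randomly assigned to a hypothetical innovation
# grant (treated) or left unsupported (control).
treated = [12.1, 8.4, 15.0, 9.7, 11.3, 13.8, 7.9, 10.6]
control = [6.2, 7.5, 5.1, 8.0, 6.9, 7.2, 5.8, 6.5]

# Under random assignment, the control group's mean approximates the
# counterfactual: what treated firms would have achieved without support.
ate = mean(treated) - mean(control)  # estimated average treatment effect

# A simple unpooled standard error for the difference in means.
se = sqrt(stdev(treated) ** 2 / len(treated)
          + stdev(control) ** 2 / len(control))

print(f"Estimated impact: {ate:.2f} percentage points (SE {se:.2f})")
```

In the nonexperimental settings that are more common in practice, the same treated-control comparison would need quasi-experimental adjustments (for example, matching or difference-in-differences) before it could be read as an impact estimate.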
The desired impact of innovation policy depends on choosing the correct solutions that address market and system failures that hinder innovation and implementing those solutions efficiently and effectively. Even the correct policy solution—one that has, in practice, proved to be effective at addressing a market failure—will not deliver those results if its implementation is flawed. Without M&E, policy makers would be unable to determine whether lackluster results were caused by an incorrect selection among policy tools or by the suboptimal implementation of the correct policy. Monitoring is a continuous activity, whereas evaluation tends to be a periodic activity—occurring, for example, ex ante, midterm, at the end of the active period of a policy, or sometime after the policy’s termination (as in the case of impact evaluation). Figure 1.1 illustrates how both activities are situated along stages of the policy process. At the policy formulation stage, ex ante evaluation6 (often in the form of feasibility studies) can be adopted to comprehend the feasibility of the innovation policy and to facilitate planning for the intervention, including budget planning. Upon adoption of the policy, a solid monitoring mechanism supported by regular and systematic data collection is crucial. Interim evaluation provides periodic assessment of how well the intervention is performing, and ex post impact evaluations take place after the policy intervention concludes, thus giving time for the effects of policies to manifest.

5 Impact evaluation can be defined as “measuring the change in outcomes for those affected by the program compared to the alternative outcomes had the program not existed” (Gugerty and Karlan 2018, based on Glennerster and Takavarasha 2013).
Evaluations are increasingly being designed before interventions are implemented, as that is a better time for constructing evaluations than the moment they are needed—especially in the case of evaluations based on experimental methods, such as randomized controlled trials (RCTs), which are designed contemporaneously with the intervention itself.

FIGURE 1.1 Monitoring and Evaluation (M&E) Embedded in the Policy Process

[Figure: maps M&E activities onto the stages of the policy process. At inception and design: theory of change, intervention logic, existing M&E data, ex ante and feasibility evaluations, and establishing a baseline. During authorization, decisions, and implementation: evidence-based processes, coordination and management, monitoring, cost accounting, data collection, and interim evaluation with course correction. At closure and reformulation: final reporting, summative evaluation, benchmarking, and impact evaluation (with a time lag to allow impacts to manifest).]

Source: Original figure for this report.

Ideally, all policy interventions and programs should be monitored and evaluated, and the outputs from M&E—that is, the findings from periodic and impact evaluations—should serve as feedback to inform further policy adjustment and reformulation or as evidence to terminate the policy intervention. This policy learning process essentially forms a feedback loop to connect the current policy cycle to the next cycle, in which policies, if not terminated, would be improved and positioned to better serve the target. In reality, driven by limitations in resources (such as budget or staff), capacity, timelines, and policy nature (not all policies have clear boundaries from their contexts or other related interventions), not all policies are subject to M&E, especially impact evaluation.
Although monitoring is increasingly common, covering most programs in many innovation systems (for example, in the case of Korea), impact evaluation is typically applicable only to large projects with high priority and solid underpinning data collection efforts.7

6 Ex ante evaluations provide input to the policy design and often involve the use of logic models or frameworks to outline the expected intervention logic, input, output, outcomes, and impacts. Ex ante evaluations also often involve the execution of feasibility studies to assess the planned interventions at the operational level. Including both types of assessment is important so that both the intervention logic and the practical feasibility of interventions can be justified.

7 For a detailed account on impact evaluation, see Gertler et al. 2016.

Challenges Associated with M&E of Innovation Policy in Developing Countries

Some challenges are associated with the M&E of innovation policy per se, and some challenges are specific to conducting M&E in developing countries. The identification of those challenges has been based on multiple sources, including extensive literature review; scoping with the World Bank’s task teams working on innovation policy in Indonesia, the Philippines, and Viet Nam; and findings from the functional review of PERs.

Inherent Challenges Associated with M&E of Innovation Policy

M&E of innovation policy interventions tends to be complex owing to a range of issues at both the governance and technical levels (Innovate UK 2018).

• Diversity and complexity of innovation policy interventions. As noted, innovation policy is diverse—far more so than the traditional, linear thinking on R&D activities would suggest. Innovation policy increasingly covers indirect and nonfinancial measures, such as advisory services and networking support, which are difficult to measure and whose impacts can be difficult to attribute.
Often the implementation of innovation policy spans multiple line ministries and other authorities, so defining a single “owner” of the intervention is challenging. Moreover, innovation policies often come in the form of “mixes” comprising multiple instruments functioning in unison, making it difficult to specify which instrument was responsible for which effects and hence complicating the process of refinement. Complementarity is also a challenge: policy mixes involve measures across multiple sectors, and the effectiveness of those measures can be compromised if the level of complementarity is low and contradictions or inconsistencies exist between instruments.

• Uncertainty and long maturation periods for innovation results. Innovation policy interventions typically must be sustained for several years to allow results to materialize once innovation inputs have been assigned; innovation activities often must be deployed over several years before their results can be observed. For R&D support during the precommercial stage, the impact of output additionality (that is, additional outputs exclusively attributable to the intervention) takes a particularly long time to manifest. During the initial years following public support, therefore, the returns may appear unpromisingly low.

• Difficulty in disentangling results from knowledge spillovers to unintended or indirect beneficiaries. Governments do not launch interventions in isolation. The businesses being supported are typically situated within dynamic and complex innovation systems, with many institutions operating and interacting simultaneously. Companies might receive support from both local and national governments, from different line ministries, and from different types of policy instruments, especially those targeting small and medium enterprises.
Attributing the contribution of individual interventions to observed improvements in innovation performance is therefore difficult. Even though the adoption of experimental methods (such as RCTs) addresses that problem to an extent, those methods are not universally applicable to all innovation policy interventions, and resources or capacity are usually insufficient to support their wide adoption (European Commission 2012).

• Limited sample sizes for statistical analysis due to the limited outreach of innovation programs to the business population. Compared with broader policies, such as those regulating education and employment, innovation policy typically targets a much narrower segment of the business population. That limitation poses challenges to collecting statistically significant samples for the sophisticated analysis required to evaluate complex policy mixes.

• Spillover effects of innovation and limited observability of outcomes, which lead to identification issues. Whereas some innovation outputs are tangible (for example, new products), others are intangible, with their primary output being valuable knowledge embedded within human capital. That intangibility often leads to spillover impacts that are difficult to observe, trace, or quantify. In the near term, tangible returns to public investment in innovation can appear low, while spillover effects remain intangible and often extend beyond the target group itself.

Challenges Associated with Conducting M&E in Developing Countries

The M&E of innovation policy faces additional challenges in developing countries compared with developed countries. Those challenges include but are not limited to the following:

• Lack of leadership commitment to learn, which hinders adoption of M&E systems. Incentivizing policy practitioners to mainstream M&E as part of their responsibility is difficult.
Although in many developed countries M&E has been mandated through strong regulation or legislation (as in the case of Korea), in developing countries incentives often are not in place to motivate or pressure government officials to build M&E into their practice (for example, in certain East Asia and Pacific countries, M&E is included in policy master plans but not really translated into practice). Even worse, some agencies in developing countries resist embracing M&E for fear that funding for innovation programs will cease if their evaluations yield negative results. A predisposition to risk aversion, in which fear of failure (and the loss of face or political standing that failure may bring) looms large, can prevent agencies from establishing and maintaining well-functioning M&E mechanisms. In this context, M&E is essentially carried out to fulfill the minimum compulsory “technical” reporting and fiduciary requirements established by law, with the crucial analysis of policy implementation and the distillation of its lessons for future implementation being discounted. Under those circumstances, a mindset geared toward transparency and an openness to understanding what works and what does not are difficult to engender and sustain. A change in mindset and culture is required, whereby leaders at these agencies promote a culture of open policy learning and practitioners have the space to take informed risks in pursuit of improving the quality of policy through experimentation with novel policy ideas.

• Legitimacy and establishment of business innovation policy relative to other policy domains. The deployable range of innovation policy instruments is diverse and, therefore, often under the purview of different organizations.
Whereas in developed countries there are established practices of bringing different authorities together in an umbrella body or “innovation agency” (Aridi and Kapil 2019) to promote innovation (for example, Vinnova in Sweden, as elaborated in chapter 2 of this study), in most developing countries different innovation policy instruments reside with different agencies. No specific “innovation policy” domain exists, and different ministries and agencies may be responsible for different aspects of innovation. That structure creates ambiguity in responsibilities, especially in policy implementation and monitoring. Some policies have clear ownership (for example, R&D grants), whereas others must comply with their “home domain” first (for example, taxes or public procurement).

• Difficulty in coordinating between ministries and agencies to streamline M&E of innovation policy. The aforementioned ownership (or primary authority) problem poses a coordination challenge, especially when the ministries of science and technology lack sufficient funding or the innovation agenda is not a national priority, which often is the case in developing countries. Consequently, M&E of innovation policy is rarely done with a holistic and long-term perspective. At the working level, coordination challenges can mean limited access to the data needed for M&E and a lack of communication to facilitate transparency, among other issues.

• Lack of organizational capacity and expertise to design and deploy M&E and to use M&E for the purpose of learning. Institutional capacity in developing countries is typically underdeveloped, leaving little room for agencies to undertake M&E activities, let alone analyze collected data to inform future decisions. At the working level, qualified M&E professionals are often scarce, leading to a major capability gap with respect to the integration of M&E tasks into everyday activities.
According to findings from the World Bank’s PERs in STI in Indonesia, the Philippines, and Vietnam, even when M&E is conducted, it is characterized by absent or very limited use of M&E frameworks, fragmented information management, and incomplete use of M&E results to support learning and inform future policy decisions, owing to limitations in capacity and expertise.

• Limited resources for M&E infrastructure and capacity building. Although evaluation of essential R&D programs is commonly seen as part of regular policy practice, securing funding sufficient to address the M&E needs of all types of innovation policies often is difficult. Beyond funding, modern data infrastructure is another gap, preventing developing countries from leveraging rich data from sources beyond traditional self-reporting methods. Even though donors and development agencies often require and provide resources to undertake M&E, their support typically is very selective and sometimes political, thus impeding M&E from becoming a routinized part of innovation policy practice. Financial resources for M&E activities tend to be limited, and most mechanisms in developing countries offer few incentives or training for staff.

Korea’s Experience in Addressing Challenges Associated with M&E of Innovation Policy

As shown in chapters 3 and 4 of this case study, although Korea understandably still faces the inherent complexity and uncertainty associated with M&E of innovation policy interventions, the country has managed relatively well, and increasingly so, to address the challenges discussed above throughout its process of catching up using an innovation-driven growth model.
The Korean experience presents an innovation policy M&E mechanism that has kept learning, evolving, and adapting toward increasingly sophisticated policy practices, responding dynamically to the needs of business innovation in the country over the decades. The analysis presented in this case study demonstrates that Korea is a practical example for developing countries to observe, not only because of how successfully it has achieved certain M&E objectives but also because of its “imperfection” and how it operated within real limitations and constraints. Reviews of Korea’s experience and international practices revealed that those challenges, and the principles and approaches to addressing them, fall under three main categories: (1) governance, (2) data and methods, and (3) capacity and resources (see chapter 2 for a detailed account). Several of the challenges outlined in the preceding section relate to governance, such as a lack of leadership commitment and the lack of a holistic view. Some relate to data and methods, such as the difficulty of disentangling results from knowledge spillovers. Others fall under capacity and resources, such as difficulties in coordination and a lack of expertise. Whereas the challenges developed countries face regarding M&E of innovation policy often concern data and methods, the challenges developing countries face spread across all three categories. Korea’s experience sheds light on all three; in particular, it offers useful takeaways for developing countries (including Indonesia, the Philippines, Viet Nam, and others) on how to address those challenges, particularly in the categories of governance and of capacity and resources.
For example, Korea addressed the issue of ownership by mandating M&E frameworks through legislation, providing a basis to establish M&E as part of the innovation policy cycle and paving the way for a clearer division of labor and ownership between different authorities and affiliated organizations (notably, MSIT oversees R&D policies, while MOEF oversees non-R&D innovation policies, with support from their respective designated agencies). As another example, Korea addressed part of the coordination and data collection challenge by mandating and deploying a centralized innovation policy M&E data portal, the National Technology Information Service (NTIS), which serves as a one-stop shop for M&E information and data open not only to government officials but also to the broader group of stakeholders and users.

This report unfolds as follows. Chapter 2 presents international good practices in M&E of innovation policy, organized around the three categories: (1) governance, (2) data and methods, and (3) capacity and resources. International good practice serves as a benchmark to lay the groundwork for presenting the Korean experience. Chapter 3 gives a detailed account of the innovation policy M&E mechanisms in Korea (differentiated according to R&D versus non-R&D innovation policies) and uses the example of M&E of R&D tax incentive (RDTI) schemes to demonstrate in depth how the M&E mechanisms work in practice, followed by lessons learned and limitations. Chapter 4 examines the Korean experience in contrast with the challenges seen in developing countries and with international practices, and concludes with takeaways to help client countries prioritize steps toward better M&E mechanisms for innovation policy.
02 International Good Practice in M&E of Innovation Policy

More countries than ever have adopted M&E systems across their governments, including the agencies concerned with innovation policy. This chapter presents a framework with categories, principles, and challenges inferred from international experiences. Information for this section was drawn from interviews with international experts and from reports on M&E systems with a focus on innovation policy.

Analytical Framework

In this chapter, we propose a framework to help synthesize lessons for developing an M&E system focused on innovation policy, and we develop a few basic principles within each of its categories. We use three broad categories that encompass most of those used in the examples previously provided, classified in a way that suggests an initial division of labor when developing M&E systems. The proposed framework consists of the following categories:

• Governance
• Data and Methods
• Capacity and Resources

Governance refers, in a broad sense, to emergent authority in a domain beyond formal government structures. The important values and priorities that shape M&E activities are considered under the governance category, as are the specific normative and legal frameworks that require direct compliance. Patterns of interaction among relevant actors resulting from norms and statutes are also considered under this category.

Data and methods address the technical dimension of an M&E system. The appropriate methodological framework and data-gathering systems are governed by professional expertise and consensus in the fields of study related to the domain to which an M&E system applies. The more significant role of expertise, rather than authority and legitimacy, distinguishes this category from the previous one.
Capacity and resources address the feasibility of an M&E system. The convergence of priorities with professional judgments does not suffice if the sustained capacity to install and operate an M&E system is not in place. Given that M&E is often considered an overhead expense, special focus on this dimension is required. From the interviews with experts and the review of international examples, important principles are revealed and captured under each general category of an M&E system for innovation policy. In this chapter, the M&E systems and contexts of four countries—namely, Sweden, Australia, Uruguay, and the United Kingdom—are used to illustrate these principles.

Principles of Good Practice for Framework Categories

Governance

a. Develop a culture of transparency

The institutional culture within which an M&E system is set to operate has great influence on its effectiveness. Having senior policy leaders assign importance to the function of M&E is essential. There also must be a commitment to transparency in government for M&E to help improve policy results. Transparency and accountability are primary motivators for establishing such a system in the first place, and, in turn, they drive the focus on improving methods and approaches on the technical side. Experience shows that, although M&E efforts often begin with a search for the most sophisticated methods and approaches, they will fail to deliver as expected if the institutional culture does not assign high priority to transparency and accountability.8 Much effort can be wasted when this principle is not followed, because it sets the stage for successful M&E. Further, as the missions of government agencies change, the implications for the fulfillment of the principle of transparency and accountability must be elucidated under the new circumstances.
Concerns over corruption may defeat the purpose of pursuing M&E in the first place, as the incentives to discover the need for improvement will be absent. All international experiences considered to be good practice show a commitment to this principle (see the examples of Sweden, Australia, and Uruguay that follow, as well as Korea in the next chapter), one that is ingrained in the rank and file of the organizations responsible for M&E. The culture of transparency also should extend to society at large. Moreover, placing evaluation data in the public realm in anonymized form can serve a wider set of actors beyond program managers, such as members of academia and the public sector. Among examples from around the world, the culture of accountability and transparency in Sweden has a long tradition; financial audits and process evaluations in the country’s public sector have been common practice for more than 50 years. Government reform in Sweden during the late 1980s and 1990s led to a larger role for evaluation. Vinnova, the Swedish Agency for Innovation Systems, has responsibility under its overall government framework for the formulation of strategies, the setting of objectives, and M&E in the accountability process (Christensen, Laegreid, and Wise 2003). In another example, Australia was an early adopter of government reform that included results-oriented management. This approach led inevitably to an evaluation policy that applies to the entire spectrum of government activities. Beginning in 1983, the Australian government engaged in public-sector reforms to improve the performance of its ministries and agencies significantly.

8 As reported by several evaluation experts interviewed for this report.
These reforms included principles of program management and budget-setting guidelines for sound management practices, the collection of performance information, and the regular conduct of program evaluation. The Department of Finance took the leading role in designing and implementing these reforms. In the years since, the specific implementations of evaluation policy have undergone many changes, but the commitment to a broad-based evaluation policy remains. According to one review of this history of Australian M&E policy, the key outcome of the process has been a firmly rooted culture of transparency and accountability (Mackay 2004). The case of Uruguay’s National Innovation Agency (ANII) and its M&E of innovation policies is interesting because of its regularity and sophistication, even though it is not embedded in a national evaluation policy as longstanding and mature as Sweden’s and Australia’s. The country created its national M&E system in 1995, El Sistema de Evaluación de Aprendizaje (The Learning Assessment System; SEV), which focused explicitly on instilling a culture of transparency and accountability in all public sector processes (Zaltsman 2006). The government of the United Kingdom (UK) provides another example of the inclusion of evaluation throughout a country’s policy process, mandated by law and specified in significant detail in manuals that guide all agencies in their evaluation efforts. These manuals are also good examples of a way to instill a uniform culture of transparency by stating in detail the purposes of these efforts and how they contribute to specific requirements of the law.
The Magenta Book: Central Government Guidance on Evaluation (UK, HM Treasury 2020) introduces the general evaluation guidelines of the central government, indicating that evaluations are required for spending reviews and are also conducted in response to potential scrutiny and challenge from several government offices, among them the National Audit Office, Select Committees, and the Regulatory Policy Committee (UK, HM Treasury 2020, 10). Regarding innovation specifically, Innovate UK, part of UK Research and Innovation (UKRI), has adapted the guidelines to its policy domain, thus specifically acknowledging the transparency and accountability mandate to maintain the legitimacy of its interventions (Innovate UK 2018, 7).

b. Establish explicit rules and authority

To set the right priorities contributing to a culture of transparency and accountability, explicit rules and authority must provide the proper foundation for the M&E system. Agencies and managers of M&E must have well-defined roles and the proper incentives to carry out this function effectively (Görgens and Zall Kusek 2009). All examples of good practice converge on the need for M&E systems to be set by law.9 The norms mandating M&E not only define requirements and set obligations but also establish the demand for its compliance function and, if properly designed, match the financial side of the agencies’ mission with its results. Nevertheless, the match must also account for the diverse functionality of missions, which results in different timing and the need for flexibility of funding use over various cycles. The statutory framework makes the case for the role of the M&E system in the policy domain, in that it establishes reporting requirements and responsibilities and mandates the use of the M&E results.

9 As reported by experts interviewed for this report and illustrated by the case of Korea developed in this report.
All four of the case examples mentioned in the previous section illustrate this principle. The Swedish government, in recent times, has given greater autonomy to agencies in the formulation of their strategies and the setting of objectives, leading to a greater need for evaluations in the accountability process (Christensen, Laegreid, and Wise 2003). As a result, evaluations are carried out by many governmental agencies, and Vinnova leads this effort in science, technology, and innovation. In the case of Australia, the Commonwealth Scientific and Industrial Research Organisation (CSIRO) is evaluated periodically under the country’s general government evaluation policy. The agency partners with outside evaluators to carry out performance evaluations of its portfolio (Industry Innovation and Science Australia 2016). Uruguay’s innovation agency, ANII, has the explicit mandate to manage a portfolio of innovation policies with stable objectives, which it has carried out since 2008 (Bukstein et al. 2020). The portfolio consists of 20 instruments that include financial incentives, grants, and support for specific innovation activities such as prototyping, individual entrepreneurship, and innovation marketing, among others. ANII periodically evaluates the entire portfolio of innovation instruments under its purview. The Magenta Book explicitly recommends that evaluation be conducted as a collaborative activity between independent specialist evaluators and the policy design and implementation teams (UK, HM Treasury 2020, 18). It presents an overview of the evaluation process (scoping, data collection and methods selection, evaluation management, and the use and dissemination of evaluation findings) with a clear indication of roles and responsibilities at each stage (UK, HM Treasury 2020, 18ff).
When referring to evaluation management, it states, “It is crucial to set up effective governance structures at the beginning of an evaluation” (UK, HM Treasury 2020, 70). Three typical governance arrangements are mentioned: a Policy Program Board, an Evaluation Steering Group, and an Expert Peer Review panel.

c. Empower insider ownership

The governance framework can also enhance or diminish the potential quality and usefulness of the M&E system, depending on the role assigned to office-level staff. Managers of policies under evaluation have the most knowledge about the context of implementation and will also be responsible for carrying out the changes and improvements stemming from evaluation results. These individuals provide crucial access to continuous monitoring information and data needed for future evaluations. In addition, their sense of commitment to the culture of transparency and accountability goes a long way in providing inspiration and leadership for proper M&E implementation. A governance framework that provides for insider ownership will increase the chances of using evaluation for learning and continuous improvement. Even though the objectivity of M&E often requires the participation of outsiders, a proper balance between insider and outsider involvement gives an M&E system a greater chance of success and sustainability. Insider ownership also ties the professional reputation of staff to the quality of their M&E effort, reinforcing the culture of transparency and accountability with a positive incentive. For example, Australia’s CSIRO includes in its methodology a benchmarking of its performance against international peers and its definition of goals and measures, as well as an identification of relevant actors and key stakeholders. The evaluation framework developed by Uruguay contains an explicit classification of all instruments by type of objective and the relation of objectives to market and system failures.
From this classification follows a three-stage model that addresses the key components of its policy strategy: (1) input additionality (as opposed to crowding out), (2) firm innovation behaviors, and (3) firm performance impact on the market. The Magenta Book specifies this principle by recommending the constitution of a Policy Program Board to ensure buy-in from decision makers and the use of evaluation findings.

d. Focus on learning as well as on accountability

Directly following from the previous principle, M&E systems should be established with a focus on learning and the continuous improvement of policy design, as well as on accountability. This prioritization creates roles not only for M&E practitioners—whether insiders or outsiders—but also for a broader set of stakeholders, including other interested parties in the public sector, those affected by the policy as potential direct or indirect beneficiaries, and professional communities with important knowledge capital related both to M&E and to the policy domain. Ideally, the process would include significant interactions among the stakeholders and a practice of making evaluation results public. With a focus on learning, there is a greater likelihood of willingness to identify that a policy is not working as expected. The Uruguayan system indicates, among its objectives, support for continuous improvement and for informing budget allocation. An Inter-American Development Bank (IDB) study identified Uruguay as one of the Latin American countries that made the most progress on its M&E system from 2007 to 2013 (Kaufmann, Sanginés, and Moreno 2015). This principle is stated as fundamental under the central government guidance for evaluation in The Magenta Book.
The executive summary states that the complementarity of learning and accountability is the crucial purpose of evaluation of government policy (UK, HM Treasury 2020, 5). Its application to specific areas, such as innovation, is also explicitly stated. In the presentation of its new evaluation framework in 2018, Innovate UK makes the same point to explain why evaluation of innovation policies is important (Innovate UK 2018, 7).

e. Adapt to the institutional and political context

The application of these principles to specific implementations of an M&E system must be adapted to each country’s context and its specific institutional arrangements. This is the reason for suggesting “principles” that require interpretation to uncover their relevance to the specific context. Therefore, interpreting each principle for the context becomes, in itself, a principle in the governance of M&E systems. The obstacles to overcome, the available capacity and organizational roles, and the organizational culture may lead to very different paths toward the goal of high-quality and effective M&E (Edler et al. 2012). In some cases, frequent changes in government ministries and agencies require an adaptable M&E system to remain relevant and viable. In developing nations, corruption is often an important concern that tends to put greater emphasis on accountability, requiring great effort to keep the learning side in focus. The use of evaluations impinges on the setting of priorities and the design of government interventions, which inevitably have political implications. The degree to which the involvement of politics affects the proper operation of M&E and its evolution will reflect this process.
It has also been noted that innovation policy, in particular, has a weaker position in the policy environment, and ambiguous or negative evaluation results consequently tend to call into question the government’s role. This is not generally the case in other policy areas, where the role of government is not doubted even when policies are found not to work as intended. It has been noted that Sweden differs from other countries in how it implements similar principles related to innovation policy and M&E. Sweden’s political culture of consensus seeking and division of labor across sectors results in a decentralized approach to policy evaluation in general (Nordesjö 2019). The UK, on the other hand, has a much more centralized approach to evaluation, as reflected in the detailed guidance from the central government. The policy-making process applied across the UK government makes evaluation a core component that permeates it. The process, labeled ROAMEF (Rationale, Objectives, Appraisal, Monitoring, Evaluation, Feedback), includes evaluation dimensions at every stage.

Data and Methods

a. Define the focus of the evaluation

The operations of an M&E system must be congruent with the content of the policies they serve. Generic definitions of the activities, milestones, and goals of many policies do not suffice to guide the design of a system intended for useful M&E. The M&E system also must include a precise definition of the focus required by each policy. It is not unusual, at first, for policy makers and staff to have rather vague notions of what would constitute the success of policies under their purview. Moreover, in the case of innovation policy, most changes expected of government interventions occur at the system level (that is, the society and its economy), where goals and measures of their attainment are very difficult to define.
This difficulty is exacerbated by the recent shift in the orientation of innovation policies toward missions aimed at broad social development goals, keenly exemplified in the European context. Taking the path of least resistance may lead to complying with the requirements of M&E while defining goals much more narrowly than the true purpose of the policy. As a consequence, much of the benefit of having an M&E system would be lost in pursuit of a reduced measure of guaranteed success. International good practices have an explicit focus on the effects of the policies on the system. In the case of Sweden’s Vinnova, its evaluation approach focuses on the results of innovation policies, a focus identified in the definition of its mission, which sets up the framework for evaluations by design. The Australian approach also reflects the principle of focusing the M&E system on the determination of system-level results. The goals and measures, as well as the relevant actors and stakeholders, are clearly established. The Uruguayan case likewise illustrates the focus on impact, through the concept of additionality and an up-to-date assessment method, given the difficulties inherent in successful innovation policy. The Innovate UK framework specifies the focus through an in-depth understanding of the intervention’s logic model as the first step of the evaluation (Innovate UK 2018, 16).

b. Methods matter but have limitations

M&E has many technical challenges, so it is not surprising that the bulk of the literature is devoted to methodological discussion. Policies generate complicated phenomena in the wake of interventions in the social systems they affect. Analyzing these phenomena to assign their causes and determine their effects, both desirable and undesirable, often presents serious difficulties.
In addition, teasing out the effects of the policy from other factors that may have confounding consequences calls for sophisticated analytic methods. In some countries, experimental techniques such as randomized controlled trials (RCTs) to prove policy effectiveness are considered the gold standard of evaluation methods. Nevertheless, they are complicated and challenging to implement and often inappropriate for policies in which comparative arrangements with random assignment are not feasible. Experience also suggests that increased sophistication may come at the price of reducing the chances of implementing results when other conditions are not met. The need for appropriate methods is generally met by the availability of specialized technical resources, both internal and external, to implement them. In sum, M&E can be carried out with benefit, including learning for continuous improvement, with an adequate choice of methods selected from a relatively large menu of options, given the specific context and means. The international examples mentioned in this chapter use multiple methods for their M&E efforts, depending on the needs of the policy in question. The Magenta Book acknowledges the complexity of policy interventions, which requires flexibility in the approach to evaluation projects (UK, HM Treasury 2020, 74). The manual presents a long list of possible methodological approaches and the circumstances under which they might be useful. The key point it makes is that the concern for quality and consistency does not rest on a single choice of method. In its evaluation framework document, Innovate UK raises the special challenges of evaluating innovation policies. These include the availability of information, the heterogeneity of beneficiaries, the low observability of knowledge for innovation, and rapid change in the business environment, among others (Innovate UK 2018, 10). The report also presents creative approaches to overcoming some of these challenges.

c. Data are important

When governments are forming innovation policy and procedures, the importance of gathering the right kind of data at the right time cannot be overemphasized. Technical recommendations for high-quality M&E systems often begin with observations about data. However, an approach to data gathering and analysis based on up-to-date data science depends on the governance framework identifying it as a priority and dedicating the proper scope to its execution. The most recent examples of best practice aim for data systems capable of real-time mapping of the innovation ecosystem. In this sense, monitoring is not a mere subsidiary task of routine administration. A significant degree of learning occurs by dynamically observing the system beyond the indicators of activity of individual policies. The perspective on data that leads to a well-grounded approach is defined by the goal of evidence-based policy. Therefore, it is not only the gathering of data that is at stake, but also the quality of the processing and analysis that produce evidence for decision-making. In other words, the credibility of evidence of policy effectiveness is crucial, both to learning and to evidence-based policy. In Australia, the model, with its goals and measures, guides systematic data gathering to produce a scorecard of 250 metrics for relevant performance indicators, including benchmarking against international peers. In Uruguay, the data system in place is associated with the overall national evaluation framework, and the local agency's responsibility also contributes to a high-quality evaluation approach. The importance of data is stressed in the UK evaluation guidelines provided in The Magenta Book: evaluation scoping begins with establishing the availability of data about the intervention and its context (UK, HM Treasury 2020, 21).
The guidelines also indicate that planning for the data required in evaluations should be done alongside the design of the policy. Gaining access to data and linking to all necessary and potential sources are unlikely to be successful after the policy has been implemented and run its course. Aside from types and sources of data, there is strong emphasis on data quality and handling; approaches to quality checks and data-handling standards are described (UK, HM Treasury 2020, 63). Innovate UK has begun real-time data gathering for some innovation support policies to overcome the relatively small sample size of firms receiving this support (Innovate UK 2018, 10).

Capacity and Resources

a. Tie evaluation to policy design capabilities

A professional consensus on introducing evaluation considerations during the design stage of any policy does not always translate into actual policy-making practice. For policies to be effectively formed, policy design capabilities must include evaluation capacity, especially in the deployment of sufficient human resources and in the identification and development of appropriate practices at the design stage. As mentioned in relation to defining the proper focus of evaluation, the formulation of the policy is key to what follows in M&E. Proper design also requires evidence that the proposed policy approach to the problem is reasonable. In other words, the evidence-based policy approach requires M&E capacity from the inception of the policy. In terms of practice, feasibility studies (often labeled "ex ante evaluations") are needed at this stage and demand appropriate human and financial resources. Moreover, experiments that may be necessary at the design stage have their own requirements for adequate resources. The design stage thus outlines the demand for capabilities and also assigns resources to the M&E of the policy. The Magenta Book, Innovate UK, and many specific guidelines for evaluation stress this principle abundantly.
The need to plan for data requirements during the design stage has been mentioned, but the plan must also include the development of logic models or theories of change during design, budgeting, the allocation of human resource capabilities, access to beneficiaries, monitoring processes, and relations with stakeholders, among other considerations, all of which are tied both to the design of interventions and to their evaluation.

b. Leverage both internal and external resources to develop capacity

Capacity requirements for policy implementation are generally dynamic, and the specificities of M&E are no exception: they change with the changing policy environment and with the learning process that the M&E system itself enables. Three main sources of continuing support for developing capacity are found in good-practice examples. First, support from international partnerships and multilateral organizations has proven key to enabling the design and implementation of M&E systems in developing nations. This support provides well-established vehicles of knowledge transfer and, oftentimes, access to initial funding. Second, communities of practice, both domestic and international, provide access to up-to-date professional advice and support. These communities offer reservoirs of knowledge and expertise that agencies cannot afford to maintain on their own. Third, many developed nations establish specialized agencies with a strong professional profile to provide M&E services and quality control to the evaluation offices in ministries and other agencies. Vinnova possesses the expert human capital to address the evaluation needs of innovation policy, and its staff members interact fruitfully with external professionals.
The agency draws upon the expertise of the professional community in the country and of international entities, especially the European Union and the European Network of Innovation Agencies (TAFTIE). In Australia, all agencies have a mandated evaluation component within their budgets. CSIRO also draws on external expertise to produce periodic performance evaluations (Industry Innovation and Science Australia 2016); in recent exercises, the private consulting firm ACIL Allen carried out its performance review. Uruguay has drawn on the Inter-American Development Bank (IADB) and the World Bank to increase its evaluation capabilities (World Bank 2015). The Magenta Book warns against underestimating the resources needed to conduct an evaluation (UK, HM Treasury 2020, 68). It breaks down the needed resources into several categories: financial, management, analytical, policy, delivery bodies, wider stakeholders, and post-delivery resources. This list of categories highlights the complementarity of internal and external resources, especially in terms of analytical skills, access, and inherent participatory requirements. Innovate UK also indicates its reliance on external resources, especially for innovative methods and analytical skills, drawing on the broader UK government as well as on the European Commission (Innovate UK 2018, 4).

Concluding Remarks

The implementation of the principles for M&E system best practices presents significant challenges, even to developed countries. It should therefore come as no surprise that applying these principles in the contexts of developing countries will not be straightforward. Nevertheless, understanding a few practical considerations can contribute to the success of an M&E system. First, establishing a culture of transparency and accountability may take many years and often requires a broad national initiative.
The inculcation of this beneficial culture is unlikely to result from the initiative or effort of a single agency and its team. Second, even within a mature culture of transparency and evaluation, establishing explicit rules and authority with proper incentives set by statute may go through several iterations and institutional changes. This is true in most developed countries and should therefore be expected in developing countries as well. Third, evaluation requires specialized capabilities that are not inherent in all public organizations. These critical capabilities must be acquired or provided for, and building them requires both time and significant resources. A sustained commitment to developing supportive domestic and international collaboration may be necessary. It is nevertheless possible: even smaller developing countries, as in the case of Uruguay, have made significant progress in this direction by conducting highly sophisticated and regular evaluations of their innovation policy portfolios. Fourth, rendering the entire policy formulation process harmonious with evaluation requirements can place great demands on a country's policy system. Part of the observed evolution of evaluation policy in developed nations has been concerned with adjusting policy formulation to the use of evidence and providing for it. This evolution has also been observed to involve the political process and the well-known competition for scarce resources across government organizations. Fifth, credible and flexible data systems are not easily implemented even when national-level M&E systems are in place. These data systems are often designed for very specific budget-related processes that do not fit the needs of evaluation policy. Making room for innovation-related M&E needs within established systems may be unavoidable but often remains a politically fraught process.
Finally, M&E must ultimately take a systemic approach, incorporating the effects of complementarities among programs, the indirect effects arising from the complexity of the innovation system, and the aim of impacts at the system level. These are not easily addressed and require significant expertise, and the necessary capabilities must be built progressively over time. Specific recommendations stemming from these principles and challenges, along with the presentation of the experience of Korea, are presented in the next chapter.

03 M&E of Innovation Policy in Korea

This chapter introduces Korea's approach to the M&E of innovation policy. The country's M&E framework is centered on performance, aiming to enhance the efficiency of public resource allocation. For the most part, the Korean system aligns with the recommended practices introduced in chapter 2. Korea's M&E system for innovation policy has two parts: (1) M&E of budgetary R&D policies and (2) M&E of budgetary non-R&D policies (figure 3.1). This separation reflects the distinctive management structure of R&D policies, which have an independent M&E framework on a separate legal basis owing to their national importance and large budgets (Yoon 2014). The first two sections of this chapter correspond to these two budgetary M&E frameworks (R&D and non-R&D). Note that the M&E of tax incentives for R&D, which make up a large portion of the country's innovation policy in terms of the magnitude of public spending, is managed separately. Because the framework for M&E of R&D tax incentives largely resembles those introduced in this chapter, it is discussed as an illustration of Korea's innovation policy M&E system.
This illustration provides more operational details, particularly in the category of data and methods, to policy makers in other countries who are considering learning from the Korean system, by showing how the M&E framework is implemented.

FIGURE 3.1. Structure of Innovation Policy M&E in the Republic of Korea

[Figure: the M&E of innovation policy splits into two branches. M&E of R&D programs is overseen by the Ministry of Science and ICT (MSIT), supported by the Korea Institute of S&T Evaluation and Planning (KISTEP), with line ministries below. M&E of non-R&D innovation programs is overseen by the Ministry of Economy and Finance (MOEF), supported by the Korea Institute of Public Finance (KIPF), with line ministries below.]

Source: Original figure for this publication.
Note: ICT = information and communication technology; M&E = monitoring and evaluation; R&D = research and development; S&T = science and technology.

M&E of Budgetary R&D Policies

In Korea, the Act on the Performance Evaluation and Management of National Research and Development Projects (hereafter, the 'Performance Evaluation Act') provides the basis for the M&E of R&D programs and projects. The Performance Evaluation Act defines the goal of M&E as the evaluation and management of national R&D programs and projects based on performance, thereby increasing the effectiveness and accountability of investments in R&D. The act and its enforcement decree specify M&E measures and the R&D programs and projects subject to them. The principles that guide the M&E process, which are detailed in the act, include the following (Korean Law Information Center 2020):

• Respect the creativity of researchers and consider the characteristics of the R&D programs, projects, and research institutes.
• Enhance the credibility of evaluation results by ensuring professionalism and fairness.
• Avoid repetitive evaluations by sharing the results of R&D program assessments.
• Reflect the results of performance evaluations in establishing related policies, carrying out programs, and adjusting budgets.

More specifically, the Performance Evaluation Act requires M&E measures to be structured in three layers, as shown in figure 3.2, and clarifies the governance structure. The 5-Year Master Plan sets directions and plans for the mid to long term. Based on this master plan, the Ministry of Science and ICT (MSIT) develops detailed M&E plans for line ministries' R&D programs and projects. One of the key measures discussed in this chapter is self-evaluation: in accordance with the MSIT's M&E plans, line ministries monitor and evaluate their own R&D programs and projects. The outcomes of these self-evaluations are then subject to a high-level evaluation by the MSIT. A key rationale behind this multilayered evaluation structure is that line ministries, which develop and implement R&D programs and projects, are in the best position to evaluate them with their knowledge and expertise, thereby promoting autonomy. The structure also enables the MSIT, as mandated by law, to oversee the M&E of R&D policies, which requires a deep understanding of the unique characteristics of R&D activities as well as technical expertise. The next sections examine each of these M&E mechanisms in turn.

FIGURE 3.2. Hierarchical Structure of M&E Mechanisms for National R&D Policies

[Figure: three layers, from top to bottom: the 5-Year Master Plan for Performance Evaluation of R&D Programs; high-level evaluation by the Ministry of Science and ICT; and self-evaluation by line ministries.]

Source: Original figure for this publication.
Note: ICT = information and communication technology; M&E = monitoring and evaluation; R&D = research and development.
5-Year Master Plan for Performance Evaluation of R&D Programs

The 5-Year Master Plan for Performance Evaluation of R&D Programs is developed every five years in accordance with the Performance Evaluation Act, with the goal of establishing an effective evaluation system that continually improves the performance of national R&D programs and projects.10 Begun in 2006 and now in its fourth iteration, the 5-Year Master Plan has gradually increased the sophistication of the M&E system for R&D policies, as shown in table 3.1. Notably, the adjustments made over time have amplified the core values of responsiveness, simplicity, comprehensiveness, coherence, autonomy, and accountability in the M&E system (table 3.1).

10 Beginning in 1999, before the first 5-Year Master Plan was implemented in 2006 by the Ministry of Science and Technology, the National Science and Technology Commission, a pan-governmental organization chaired by the president, was in charge of M&E of R&D programs (Bae, Chung, and Seong 2014). During this time, there were several limitations. First, the selection of evaluators did not sufficiently account for the specific attributes of the R&D program in question, leading to concerns regarding their expertise and impartiality. Second, evaluators were given only two months per year for the evaluations and had to use the same evaluation criteria and method for various R&D programs, which led to low levels of reliability and utilization. Third, there were no institutional mechanisms to enforce M&E and the use of results to improve the performance of R&D programs. This limitation was primarily due to the commission's insufficient authority to design and implement M&E processes, attributable to its status as a non-permanent governmental body. These limitations forced a reform of the M&E system for R&D programs.
With the increasing budget size of R&D programs and the need to manage their performance more effectively, the Korean government established the Science, Technology and Innovation Office within the Ministry of Science and Technology in 2004. Subsequently, the office was empowered by law to oversee M&E of R&D programs, which led to the launch of the 5-Year Master Plan for Performance Evaluation of R&D Programs.

TABLE 3.1. Key Priorities and Achievements of the 5-Year Master Plans for Performance Evaluation of R&D Programs

2006–10: first 5-Year Master Plan
• Made performance the foundation of Korea's M&E system for R&D programs, motivated by the need to assess the performance of the National R&D Program
• Introduced a dual M&E system composed of high-level evaluation conducted by MSIT and self-evaluation done by line ministries
• Extended the evaluation cycle (from annual to three-year) and the evaluation period (from three months to eight months) for in-depth evaluation
• Allowed a higher degree of freedom in evaluation by expanding the number of evaluation indicators that line ministries can choose from, from 15 to 162

2011–15: second 5-Year Master Plan
• Initiated a transition to quality-focused evaluation and expanded qualitative project evaluations by experts, aiming for a deeper assessment of achievement (for example, the significance of a scientific discovery) beyond quantitative measurements (for example, the number of patents generated)
• Expanded tailored evaluations that consider distinct characteristics of programs, such as their mission, size, and type
• Alleviated the M&E burden imposed on researchers (for example, requirements for frequent reporting) and strengthened their autonomy in project evaluations

2016–20: third 5-Year Master Plan
• Expanded efforts to establish a researcher-centric evaluation system and moved away from a system focused on managerial efficiency
• Continued to alleviate the M&E burden imposed on researchers in performing project evaluations
• Focused on the overall quality of project evaluations
• Separated R&D program evaluation from the evaluation of institutional operation to promote stable research environments

2021–25: fourth 5-Year Master Plan
• Promotes the autonomy and responsibility of the primary agents of R&D activities based on transparency
• Establishes a nationwide strategic evaluation system
• Respects the diversity of R&D activities
• Strengthens links between policy, investment, and performance

Source: Original table for this publication, based on Bae, Chung, and Seong 2014; MSIP 2015; MSIT 2020; and STEPI 2015.
Note: M&E = monitoring and evaluation; MSIT = Ministry of Science and ICT of the Republic of Korea; R&D = research and development.

Through the 5-Year Master Plan, the Korean government sets directions and plans for mid- and long-term evaluations. The MSIT leads the process of drafting the plan. To review the existing system and systematically identify areas for improvement, the MSIT forms a task force composed of policy makers with experience in evaluating programs, projects, and institutions, as well as external experts specialized in M&E. In addition, committees and conferences are organized to seek input from stakeholders, including the research institutes subject to the evaluations. Once a draft becomes available, the MSIT seeks additional input and feedback through surveys, conferences, and public hearings with line ministries, industry stakeholders, and academic scholars, among others.
Since 2017, the final draft has been subject to deliberation by the Presidential Advisory Council on Science and Technology (PACST), the highest-level advisory and deliberative council in the field of science and technology, to ensure that it is coherent and consistent with all areas of national priority. As described, the 5-Year Master Plan, along with the Performance Evaluation Act, serves as the foundation for the more specific M&E measures put in place. There are three types of evaluation to which the 5-Year Master Plan's M&E framework applies. First, program evaluations are conducted for national R&D programs that are designed and implemented by line ministries. Second, project evaluations are performed for R&D projects executed by individual researchers or research institutes and funded by line ministries. Third, institutional evaluations are run on the R&D programs of government-funded research institutes, as well as on the operation of such institutions. Figure 3.3 provides an overview of the M&E system and the three types of evaluation. These evaluations are required by the act, and evaluation results must be reported to the National Assembly. Of the three types of evaluation, the remainder of this section focuses on program evaluations, given that they are of the greatest relevance to developing countries. Employing the framework in the 5-Year Master Plan, the MSIT supervises the overall M&E of national R&D policies, not only for its own programs but also for the programs of other ministries. As the government entity with primary responsibility for the M&E of national R&D policies, the MSIT provides detailed guidelines to line ministries so that those pursuing their own R&D programs and projects can properly plan and implement M&E. In addition to requiring the evaluations, the act also requires the MSIT to publish detailed guidelines and properly communicate the evaluation results to other relevant ministries and agencies.
Section 3.1.2 elaborates on this structure.

FIGURE 3.3. M&E Framework for National Budgetary R&D Policies

[Figure: a matrix of the three evaluation types (program, project, and institutional) across the policy stages of formulation, implementation, termination, and follow-up. For program evaluation, line ministries set performance targets and conduct midterm, ex post, and follow-up self-evaluations, while the MSIT's STI Office conducts examinations, high-level evaluations, and special evaluations. For project evaluation, line ministries conduct evaluations for project selection as well as midterm, final, and follow-up evaluations, overseen by the STI Office. For institutional evaluation, line ministries and other consulting institutes self-review performance plans and reports, with high-level review and evaluation by the STI Office.]

Source: Original figure for this publication, based on MSIT 2020.
Note: M&E = monitoring and evaluation; ICT = information and communication technology; R&D = research and development; STI = science, technology, and innovation.

Line Ministries' Self-Evaluations and the MSIT's High-Level Evaluations

Within the MSIT, the Science, Technology, and Innovation (STI) Office is responsible for supervising the M&E of R&D policies. Guided by the 5-Year Master Plan, the STI Office drafts and shares detailed guidelines on how ministries with R&D programs should plan and perform self-evaluations of their own R&D programs, as required by law.11 The fact that the Act on the Performance Evaluation and Management of National R&D Projects, Etc. provides the legal basis for the STI Office to supervise the M&E process makes it easier to coordinate with line ministries.
In developing detailed guidelines, the STI Office receives the support of the Korea Institute of Science and Technology Evaluation and Planning (KISTEP). KISTEP is an MSIT-affiliated research institute established by law,12 whose key responsibilities are to support the MSIT in planning, evaluating, and managing national R&D programs. KISTEP's expertise in science and technology permits it to play a crucial role in developing standardized performance indicators and performance targets, among other functions. The STI Office then communicates these M&E guidelines to line ministries for their R&D programs. The Framework Act on Science and Technology13 also mandates that the MSIT organize education and training sessions for policy makers in charge of performance evaluations and for managers of research outcomes to improve their capabilities. This activity is also supported by KISTEP. Ministries plan for and implement M&E of their R&D programs based on the guidelines provided by the STI Office (figure 3.4). In addition to the standardized performance indicators and targets, line ministries can propose their own performance indicators and targets, which, by law, must be reviewed and approved by the STI Office before their implementation.14 To help line ministries set them properly, the STI Office organizes information sessions. These indicators and targets are later used to evaluate performance in the middle or at the end of the implementation cycle (figure 3.5). In principle, R&D programs are evaluated every three years, with certain exceptions detailed in the guidelines from the STI Office. For example, new programs are not subject to annual self-evaluations until their third year of implementation, considering that it takes time for R&D activities to generate meaningful outputs.

11 Articles 7 and 8 of the Act on the Performance Evaluation and Management of National Research and Development Projects, Etc.
12 Article 20 of the Framework Act on Science and Technology (hereafter, the ST Act).
13 Article 17 of the ST Act; art. 14 of the Enforcement Decree.
14 If a line ministry's own performance indicators or targets are deemed inappropriate by the STI Office, the ministry is given an opportunity to provide explanations (MSIT 2019).

FIGURE 3.4. Annual Process of Setting Line Ministries' Performance Indicators and Targets

[Figure: a four-step annual cycle. The Ministry of Science and ICT selects eligible programs and establishes and distributes guidelines (October–December); line ministries set performance targets and indicators (April–May); line ministries self-review their performance targets and indicators (May–June); and the Ministry of Science and ICT conducts a high-level review (July–September).]

Source: Original figure for this publication, based on MSIT 2019.
Note: ICT = information and communication technology.

FIGURE 3.5. Process of Midterm and Ex Post Self-Evaluation

[Figure: two rows of evaluation steps. In self-evaluation by line ministries (April–August), line ministries conduct self-evaluations, confirm the results, and submit them by August 31. In high-level evaluation by the Ministry of Science and ICT (September–November), the ministry receives the self-evaluation results, adjusts scores or grades if they are determined to be inappropriate, confirms the high-level evaluation results through the High-level Evaluation Committee, delivers recommendations for improvement to line ministries, and submits the evaluation results to the Presidential Advisory Council on Science and Technology and the National Assembly.]

Source: Original figure for this publication, based on MSIT 2019.
Note: ICT = information and communication technology.
Box 3.1. R&D Program Self-Evaluation Committee

Composition
• A committee must have five or more members, including one chairperson.
• Membership should be distributed properly to include women and experts from academia.
• In forming subdivisions, the number and characteristics of the R&D programs subject to evaluation should be considered.
• In forming a committee and its subdivisions, the following selection criteria are applied.

Criteria
To be selected as a committee member, the person must be an:
• Expert who has 10 or more years of professional experience in the field
• Expert who has 5 or more years of R&D experience in the field
• Expert who is an assistant professor or higher at a university
• Expert who is a manager or higher at a firm in the field

Who cannot be an evaluator
• Government official in the central administrative agency managing the R&D program being evaluated, or employee of a special institution related to the R&D program
• Person in charge of a project pursued as part of the R&D program being evaluated
• Sanctioned expert who currently cannot participate in national R&D programs
• Expert who has a history of unfair or insincere evaluation
• Employee of an organization supporting high-level evaluation
• Other experts who could harm the principle of fairness of evaluation

Although self-evaluations are managed by line ministries, the evaluation itself is conducted by committees for self-evaluation, which are composed of external experts from industry, academia, and research institutes. The guidelines from the MSIT lay out detailed and stringent criteria for selecting evaluation committee members (MSIT 2018) (see box 3.1). For instance, government officials and researchers from research institutes who were directly involved in an R&D program cannot be selected as evaluation committee members for that program.
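Read as screening logic, the selection criteria in box 3.1 amount to a simple predicate: exclusions are absolute, and otherwise expertise qualifies a candidate. The sketch below is purely illustrative; the `Candidate` fields, the reading that any one expertise criterion suffices, and the collapsing of all exclusions into a single flag are assumptions for illustration, not details from the MSIT (2018) guidelines.

```python
# Illustrative sketch only; field names and the "any one criterion
# suffices" reading are assumptions, not from the MSIT guidelines.
from dataclasses import dataclass


@dataclass
class Candidate:
    years_professional_experience: int = 0   # in the program's field
    years_rnd_experience: int = 0            # in the program's field
    assistant_professor_or_higher: bool = False
    firm_manager_or_higher: bool = False
    excluded: bool = False  # any exclusion in box 3.1 (official of the
                            # managing agency, project participant,
                            # sanctioned or unfair evaluator, etc.)


def eligible(c: Candidate) -> bool:
    """Exclusions are absolute; otherwise any one expertise criterion qualifies."""
    if c.excluded:
        return False
    return (c.years_professional_experience >= 10
            or c.years_rnd_experience >= 5
            or c.assistant_professor_or_higher
            or c.firm_manager_or_higher)
```

The ordering of the checks mirrors the box: a conflicted expert is rejected no matter how qualified, which is how the guidelines preserve fairness.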
In this way, the Korean government prioritizes the objectivity and independence of self-evaluations. Once line ministries submit their self-evaluations, the STI Office conducts high-level evaluations to ensure that the submitted self-evaluations were properly completed. The high-level evaluation has two stages. First, R&D programs' evaluation procedures, evidence, and results are examined to determine whether the evaluation was appropriately conducted. Second, should an R&D program pass this test, it will not be subjected to further inspections; however, failure to meet the test criteria subjects the program to a subsequent, stricter inspection. This could result in a lowered performance evaluation and, ultimately, a reduction in the program's budget.15 It is important to note that the STI Office acknowledges only the achievements specifically listed in the self-evaluation reports. Furthermore, in the case of the nine major achievement categories,16 only those achievements registered with the National Science and Technology Information Service (NTIS), a digital knowledge management platform for science and technology, and with specialized research management institutes are recognized. Evaluation results inform the following year's budget allocation and adjustment. In principle, programs that receive a grade of excellent or above during midterm evaluations may expect an increased budget for the following year. On the other hand, the budgets of programs that receive a grade of unsatisfactory or below are generally reduced. The results of ex post evaluations are used for follow-up impact evaluations and for allocating or adjusting budgets for research institutes.

15 This performance-based budget allocation is stipulated in article 10 of the Law on National Research & Development Monitoring and Evaluation. According to the midterm report of KISTEP's high-level evaluation results, between 2015 and 2019
Also, they provide valuable insights for future R&D programs or programs with similar elements. Most performance evaluation results are made public via the NTIS portal, in line with the principle of transparency and openness. Because of the high level of scrutiny and the performance management practices of the STI Office, line ministries have a strong incentive to pay attention to the quality of their self-evaluations and to actively share information related to the performance of their R&D programs. After the termination of an R&D program, a follow-up evaluation may be conducted to understand the long-term impact of the program.17 The follow-up evaluation aims to capture the program's effects comprehensively by measuring not only the impact in the field of science and technology but also its social and economic impact (MSIT and KISTEP 2019; MSIT and KISTEP 2021). For instance, a major focus of the evaluation is to measure the impact an R&D program generated through technology transfer and commercialization in the five years since its termination. Programs subject to such follow-up evaluations are determined through deliberation by relevant evaluation committees; programs with large budgets and follow-up programs are the main targets. The evaluation structure is similar to that of evaluations in previous stages. That is, line ministries conduct self-evaluations to measure impact, and the MSIT performs high-level evaluations to ensure accountability. Evaluation results are used for several purposes. First, if the final grade is "satisfactory" or above,18 existing R&D programs of a similar nature could receive additional funding. Second, evaluation results serve as a reference point when policy makers design and implement similar R&D programs. Third, positive evaluation results could lead to rewards for the researchers, agencies, and ministries involved in such successful programs.
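The grade-to-budget logic described above can be sketched as a simple decision rule. The grade labels follow the five-point scale in footnote 18, but the adjustment multipliers and the exact mapping are purely illustrative assumptions, not actual MSIT or MOEF figures:

```python
# Illustrative sketch of the grade-to-budget rule described in the text.
# Grade labels follow the five-point scale in footnote 18; the multipliers
# are hypothetical examples, not actual government figures.

GRADES = ["very unsatisfactory", "unsatisfactory", "moderate",
          "satisfactory", "very satisfactory"]

def budget_adjustment(grade: str) -> float:
    """Return an illustrative multiplier for next year's program budget."""
    rank = GRADES.index(grade)  # 0 = worst, 4 = best
    if rank >= 3:   # satisfactory or above: budget may increase
        return 1.10
    if rank <= 1:   # unsatisfactory or below: budget is generally reduced
        return 0.90
    return 1.00     # moderate: no change

print(budget_adjustment("satisfactory"))  # 1.1
```

In practice, of course, the adjustment is a deliberative decision informed by the evaluation grade, not a mechanical formula.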
Findings from follow-up evaluations are made public on the NTIS platform to inform future policy making and enhance accountability. Although there have been frequent changes to the mandates of the STI Office over the past few decades, the legal framework has not changed significantly. The expectation that line ministries perform self-evaluations of their R&D policies and that the STI Office ensures such evaluations are conducted properly has been in place since 2005. Over the past 15 years, this M&E system has become increasingly sophisticated through its quest to render the evaluation plan ever more transparent and objective, a process that will be discussed as a lesson later in chapter 4.

16 Academic papers, patents, original copies of reports, technological summaries, software (copyright and technology details), research facilities and equipment, chemical compounds, biological resources (biological information and life resources), and new varieties. 17 The follow-up evaluation is based on articles 7 and 8 of the Act on the Performance Evaluation and Management of National Research and Development Projects, Etc. 18 There are five grades: (1) very satisfactory, (2) satisfactory, (3) moderate, (4) unsatisfactory, and (5) very unsatisfactory.

M&E of non-R&D policies

As mentioned above, M&E of budgetary R&D policies and M&E of budgetary non-R&D policies are managed separately owing to the unique characteristics of R&D versus non-R&D policies and activities. The former is managed by the MSIT, while the latter is managed by the Ministry of Economy and Finance (MOEF). For budgetary non-R&D policies, M&E has a firm legal basis: the National Finance Act19 and its enforcement decree.
Additionally, detailed guidelines are provided by the MOEF with support from the Korea Institute of Public Finance (KIPF), a government-funded research institute specializing in financial policy. The finance act clearly states that the operation of public resources should be linked to performance and furnishes the MOEF with the necessary authority to plan and enforce M&E of policies, including innovation support programs and projects. More specifically, the M&E system for budgetary non-R&D policies aims to increase the accountability and transparency of the government's financial operations by making key information publicly available,19 to improve the technical efficiency of budgetary programs,20 and to enhance allocative efficiency by realigning budgets around policy priorities and ultimate effectiveness (KIPF 2020a). The National Finance Act mandates an M&E system for budgetary non-R&D policies that covers the policy life cycle, as shown in figure 3.6. Of the specific M&E measures in figure 3.6, this case study delves into three of the most fundamental measures particularly relevant for innovation policy in the developing country context (Park 2007):21 (1) the performance goal management system, (2) self-evaluation by line ministry, and (3) ex post in-depth evaluation. Table 3.2 summarizes key information about these three measures. Just as in the case of budgetary R&D policies, evaluation results are used to make adjustments to program budgets (Kang et al. 2018).

19 국가재정법 (National Finance Act). 20 Key information includes a budgetary program's (1) composition and spending per objective, (2) goals and level of achievement, and (3) beneficiaries and responsible entities. Key mechanisms for improving technical efficiency are identifying problems, allowing feedback, and adjusting how programs are implemented. 21 Confirmed in interview with KIPF, November 18, 2020.

FIGURE 3.6.
Framework for M&E of Budgetary Non-R&D Policies
[Figure: M&E measures across the policy life cycle (formulation, implementation, termination), by responsible government body. Ministry of Economy and Finance (Fiscal Management Bureau): Performance Plan (submitted every year) and Performance Report (submitted every year). Ministry of Economy and Finance (Fiscal Management Evaluation Bureau): Ex-ante Feasibility Study for large-scale programs; Core Program Evaluation (on-site monitoring for 3 years); Funds/Subsidies/Charges Evaluation. Ministry of Economy and Finance (Financial Innovation Bureau): 5-Year National Financial Management Plan (updated every year); In-depth Program Evaluation. Line ministries: Self-evaluation of Budgetary Programs.]
Source: Original figure for this publication, based on Chang 2020. Note: M&E = monitoring and evaluation; R&D = research and development.

TABLE 3.2. Three Major M&E Mechanisms for Budgetary Non-R&D Policies

Performance goal management system
• The system was adopted in 2003.
• M&E of ministries' budgetary programs is based on their annual performance plans and performance reports.

Self-evaluation of budgetary programs
• Checklist-based self-evaluation was adopted in 2005.
• In principle, every budgetary program is subject to the line ministry's self-evaluation every three years.
• Evaluations are based on checklists.

In-depth program evaluation
• In-depth program evaluation was adopted in 2006.
• Programs that need improvement based on self-evaluations are subject to in-depth evaluations.

Source: Original table for this publication, based on NABO 2014, 96. Note: M&E = monitoring and evaluation; R&D = research and development.

Performance Goal Management System

The purpose of the performance goal management system is to ensure the accountability of programs in the government's budgetary process.
The system was adopted in 2003 to increase efficiency in public expenditures and to promote transparency and accountability; it requires ministries to produce annual performance plans and performance reports for their own programs. The idea of linking budget allocation and management with performance-based budgeting was a worldwide trend at the time (Moynihan and Beazley 2016; Robinson and Brumby 2005). Starting with early efforts in the United States in the 1960s, performance-oriented budget reforms were introduced in Australia, New Zealand, the United Kingdom, and the Netherlands in the 1990s. The adoption of the performance goal management system in Korea represents the government's transition from an input-focused public management system to a performance-based one, in line with the global trend. In principle, all budgetary programs are subject to the performance goal management system. Prior to the fiscal year, the line ministry responsible for a program submits an annual performance plan, which should include the program's strategic goals and performance indicators, as well as targets for each performance indicator. After evaluation, the ministry produces an annual performance report, which should include performance results, the processes of achieving performance goals, and plans for future implementation. Later, when the program reaches its termination stage, these goals, indicators, and targets are used to evaluate the performance of the program. The MOEF supervises the entire process and provides detailed guidelines that ministries should follow. The detailed procedure is explained further in appendix A. Annual performance plans and reports are reviewed not only by the MOEF but also by the legislative body, mainly for monitoring purposes.
The law requires the submission of annual performance plans as attachments to the national budget bill to the National Assembly.22 Annual performance reports are required to be submitted to the National Assembly as part of the report of settlement of accounts.23 Performance information is used by the MOEF and individual ministries to inform their budget allocation and program improvements, under the overall supervision and guidance of the MOEF.

Self-Evaluation of Budgetary Programs

The self-evaluation of budgetary programs was first introduced in 2005 by adapting the American Program Assessment Rating Tool (Kang et al. 2018; NABO 2020). In principle, by law, every budgetary program is subject to this evaluation every three years.24 The approach closely mirrors the self-evaluation framework for national R&D programs, given that line ministries oversee the evaluations of their respective programs and are responsible for proposing performance indicators and targets. In addition, evaluation committees are formed using prescribed guidelines (from the MOEF, in the case of budgetary non-R&D programs) to ensure independent and objective assessment. When a program turns out to need improvement based on self-evaluation results, the line ministry can introduce adjustments to the program's budget, such as budget cuts, or to its program structure, such as a consolidation with another program. As with other M&E measures in place, evaluation results are made publicly available to support the transparency of financial operations and the effectiveness of self-evaluations. Appendix B shows a detailed timeline of the self-evaluation of budgetary programs.

22 National Finance Act, art. 34, para. 8. 23 National Accounting Act, art. 14, para. 4. 24 National Finance Act, art. 8, para. 6; Enforcement Decree of the National Finance Act, art. 45.
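Table 3.2 notes that these self-evaluations are checklist based. A minimal sketch of that logic, assuming a hypothetical checklist whose items, weights, and flagging threshold are illustrative inventions (not the actual MOEF checklist), might look like this:

```python
# Illustrative sketch of a checklist-based self-evaluation.
# Items, weights, and the threshold are hypothetical examples only.

CHECKLIST = {
    "clear_policy_goal": 20,
    "rational_indicators_and_targets": 20,
    "relevant_beneficiaries": 20,
    "targets_achieved": 40,
}

def self_evaluate(answers):
    """Score a program against the checklist and flag it if it needs improvement."""
    score = sum(weight for item, weight in CHECKLIST.items() if answers.get(item))
    needs_improvement = score < 60  # illustrative threshold
    return score, needs_improvement

score, flag = self_evaluate({
    "clear_policy_goal": True,
    "rational_indicators_and_targets": True,
    "relevant_beneficiaries": False,
    "targets_achieved": False,
})
print(score, flag)  # 40 True
```

In the Korean system, programs flagged as needing improvement in such self-evaluations become candidates for the more stringent in-depth evaluation discussed next.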
In-Depth Program Evaluation

As with other M&E measures of Korea, the primary goal was to increase efficiency in operating public resources by closely monitoring and evaluating the performance of budgetary programs, supported by a legal foundation.25 Line ministries' self-evaluations were the major M&E tool until the mid-2000s, but they were limited in the in-depth insights they could offer on the performance of individual programs. These limitations led to the introduction of the in-depth program evaluation, which subjects certain budgetary non-R&D programs to more stringent assessments. From 2006 to 2009, individual programs were evaluated. In 2010, a major change was made to the evaluation plan, which shifted the unit of analysis to program portfolios: several programs with similar policy objectives or target beneficiaries began to be grouped together into program portfolios, whose performance is evaluated to produce comprehensive improvement strategies. From 2005 to 2018, 112 in-depth program evaluations were conducted (Kang et al. 2018). Annually, about 10 in-depth evaluations are performed.26 Evaluations are conducted by KIPF under the overall supervision of the MOEF. Programs subject to this evaluation include the following, as prescribed in the law:27

• Programs that require additional evaluation based on the results of the self-evaluation of the budgetary program
• Programs with the possibility of misspending public resources due to either redundancy or inefficiency
• Programs whose budgetary expenditure is expected to increase significantly
• Other programs that require in-depth analyses of performance

As shown, the law is limited to providing criteria for selecting programs to be evaluated in depth. The MOEF has the authority to select specific program portfolios for in-depth evaluation, guided by established guidelines that direct the selection process.
The MOEF's selection also requires deliberation by the Public Finance Program Evaluation Advisory Council, which is composed of government officials from various ministries and external experts.

25 National Finance Act, art. 8, paras. 6 and 8; Enforcement Decree of the National Finance Act, art. 3. 26 To put it into perspective, annual performance reports are written on 1,800 programs every year (Chang 2020). However, in-depth program evaluations are conducted not only on individual programs but also on groups of programs. 27 Enforcement Decree of the National Finance Act, art. 3.

Once the selection is completed, KIPF conducts evaluations with support from scholars and experts from research institutes. Both quantitative and qualitative methods are used to analyze the effectiveness of programs. Evaluation results are used to adjust program budgets and improve relevant budget operation systems. Program portfolios are evaluated based on the following five criteria:

• Relevance: Is it appropriate for the government to pursue this program?
• Effectiveness: Has the program achieved the program-specific goals and strategic sectoral goals?
• Efficiency: Were the inputs efficiently utilized to produce the outputs and intermediate results?
• Utility: Has the demand for the program been satisfied? If so, to what extent?
• Sustainability: If the program is terminated, how long will its positive effects endure?

Use Case: M&E of R&D Tax Incentive Plans in Korea

This section illustrates how innovation policy is monitored and evaluated in practice in Korea, featuring R&D tax incentives (RDTIs) as an example. Technical and descriptive details will be presented where necessary.
Although this chapter discusses the specific M&E mechanisms for RDTI plans, such arrangements are representative of the general structure and level of rigor of the M&E frameworks for other areas of innovation policy in Korea. The goal of this section is to share operational knowledge with practitioners of innovation policy in developing countries aspiring to reform their own M&E systems. The long history of RDTIs makes them a good example of how the country's M&E system is deployed in practice (box 3.2). The Korean government has actively used tax incentives to promote firms' R&D activities. The history of the incentives goes back to the early 1960s (Noh and Lee 2014), and constant adjustments have been made to improve their performance. Furthermore, RDTIs take up a large portion of the Korean government's total tax expenditures, with the largest RDTI plan consistently ranking among the top five tax incentive plans by expenditure in recent years (MOEF 2020). It is worth noting that there is a separate M&E system for RDTIs because of the large size of their budget. Although this system differs slightly from the M&E systems introduced earlier in this chapter, the overall structure and mechanisms remain largely the same.

Advancement of Korea's M&E Mechanisms for RDTIs

Until 2012, self-evaluations by line ministries were widely used for monitoring RDTI programs in Korea. These self-evaluations were complemented by specialized evaluation reports of tax expenditure for RDTIs prepared by financial authorities and scholars, especially for programs with substantial resource allocation. First adopted in January 1999, the self-evaluations were a prominent M&E mechanism required by law. In-depth evaluations were introduced by law in 2013 and gradually put into effect, but their full implementation did not occur until 2015.
Since self-evaluations were not carried out by a specialized and accredited institution, they were largely a self-regulating, less stringent mechanism through which line ministries implemented M&E. This remains true to a large extent when compared with other M&E mechanisms subsequently adopted. Scholars and researchers at think tanks have complemented the self-evaluations since their adoption and continue to do so. Starting with full implementation in 2015 (but introduced by law in 2013), Korea's M&E measures for RDTI programs evolved to become more thorough in their scope and depth (NABO 2017). To enable informed decision-making in adopting new large-scale RDTIs, ex ante feasibility studies were introduced by the MOEF. Any newly proposed RDTI plan with an expected annual tax expenditure of ₩30 billion (US$22 million) or more needs to justify why and how the proposal would lead to net benefits to society. To tighten M&E of certain RDTI programs that require more stringent evaluations of their impact on the economy and society in general, mandatory in-depth evaluations began to be required under the Restriction of Special Taxation Act in 2013. Additionally, the act provided a legal basis for optional in-depth evaluations of certain RDTI programs not covered by the mandatory in-depth evaluations. Such plans include those that cannot be monitored and evaluated through individual ministries' self-evaluations, either because of the high level of complexity of the programs or because of the close interrelatedness of RDTIs that involve multiple government bodies. The self-evaluations were also strengthened in the 2013 law but not fully implemented until 2015. More detailed guidelines were introduced to improve the completeness and consistency of the individual ministerial self-assessments. More specifically, performance indicators were standardized to determine with greater objectivity whether an RDTI plan has achieved its policy goal and to allow comparisons.
Furthermore, data regarding policy beneficiaries had to be reviewed more thoroughly to ensure that target beneficiaries were appropriately selected and that they clearly benefited from the program. Complementary relations with other tax expenditure programs and fiscal policies were also reviewed. As a result of these changes, the performance and validity of RDTI programs were reviewed more comprehensively. In turn, the MOEF could secure more and better-quality data for policy making than before. Enhanced by these improvements, the self-evaluations now serve as a mechanism for responsible management of RDTIs on the front line of evaluation and provide a basis for the in-depth evaluations. Despite limitations (Kim 2017; Kim 2019), the self-evaluations represent a flexible and cost-effective means of accumulating the high-quality information that informs effective operational decisions. Although the self-evaluations are not as sophisticated or involved as the more stringent M&E mechanisms, the law details the dimensions of evaluation and other information required of them. The MOEF uses the data collected from line ministries for its high-level evaluations, through which it decides whether to support or drop an RDTI plan and develops strategies to improve the design of the plans it chooses to retain. Ex ante feasibility studies and ex post in-depth evaluations are conducted only when large-scale plans are involved, as explained previously. In the case of the feasibility studies, typically fewer than five are conducted per year for new special taxation proposals that cover a wide range of subjects, one of which is R&D.
As of July 2021, the most recent ex ante feasibility study conducted on R&D activities was the 2017 study on a program that aimed to introduce tax credits for small and medium enterprises' expenses incurred when applying for or registering patents. The program was proposed by the Korean Intellectual Property Office. The feasibility study conducted by KIPF (2017) found that although the program could be justified to a certain degree on the three pillars of evaluation (conformity with public interest, economic feasibility, and fairness), with some positive effects expected, it was clear that it would not achieve its intended policy goals. Based on this analysis, the study recommended not adopting the program, which ultimately led to the rejection of the proposal. Overall, the improvements made to Korea's M&E system for RDTIs have advanced the institutional foundation for effective M&E (Kim 2019) and enhanced government capabilities and evidence-based policy making (NARS 2018). In Korea, key M&E mechanisms for RDTIs are legally required by the Restriction of Special Taxation Act, which mandates line ministries and qualified research institutions to conduct M&E and report to the MOEF. The self-evaluations carried out by the line ministries responsible for the implementation of RDTIs have long been used as a key M&E mechanism. Since 2015, ex ante feasibility studies and ex post in-depth evaluations have been used for more thorough and systematic management of RDTI plans. These studies have been mainly led by KIPF and the Korea Development Institute (KDI), as articulated in the Enforcement Decree of the Restriction of Special Taxation Act28 (hereafter, the enforcement decree).29

28 Article 135-2 of the Enforcement Decree of the Restriction of Special Taxation Act.
The overarching goal of Korea's M&E mechanisms for RDTIs is to enhance their effectiveness and efficiency, as well as the overall performance of the taxation system (MOEF 2020). Managing RDTIs is particularly important in Korea given the sheer magnitude and increasing trend of tax expenditures for R&D activities. The amount of tax expenditures for R&D increased from ₩2.6 trillion (US$1.9 billion) in 2019 to an estimated ₩3.0 trillion (US$2.2 billion) in 2020 (MOEF 2021). In 2019, tax expenditures for R&D were equivalent to 5.3 percent of the country's total tax expenditures of ₩49.6 trillion (US$37 billion) that year. To manage RDTIs, the three mechanisms of Korea's RDTI M&E system are designed to cover the entire life cycle of RDTI plans, as shown in figure 3.7. Each of these is discussed in more detail in the following sections.

29 Both KIPF and the KDI are part of the National Research Council for Economics, Humanities, and Social Sciences, a public entity under the prime minister. Established by the Act on Establishing, Operating, and Fostering of Government-Funded Research Institutes, the National Research Council for Economics, Humanities, and Social Sciences is tasked with supporting government-funded research institutes in the areas of economics, humanities, and social sciences so that such institutes can better contribute to Korea's research community and knowledge industry.

FIGURE 3.7.
Overview of the Monitoring and Evaluation System for R&D Tax Incentives
[Figure: the three M&E mechanisms mapped to the policy life cycle (design, implementation, evaluation).
• Ex ante feasibility study (design stage). Subject: tax expenditure proposals, that is, new R&D tax incentive schemes with an expected tax expenditure of KRW 30 billion or more. Coverage: the proposed scheme's necessity, timeliness, expected outcomes, anticipated challenges, and economic efficiency and fairness. Led by: a specialized research institution; results feed into the Tax Reform Proposal.
• Self-evaluation (implementation stage). Subject: tax expenditure evaluation reports on existing R&D tax incentive schemes listed in the basic plan for special taxation. Coverage: clarity of the policy goal, rationality of performance indicators and targets, relevance of target beneficiaries, whether performance targets have been achieved, and areas for improvement (impediments to intended performance and ways to address them). Led by: individual ministries, with completeness checks by the Ministry of Economy and Finance and the Korea Institute of Public Finance; results feed into the Tax Expenditure Budget Report.
• Ex post in-depth evaluation (evaluation stage). Subject: mandatory in-depth evaluation of R&D tax incentive schemes facing the end of their sunset period with a tax expenditure of KRW 30 billion or more; optional in-depth evaluation when necessary (selection criteria specified in law). Coverage: effectiveness (level of target achievement, economic effects, income redistribution effects), validity (relevance of policy goal, target beneficiaries, and method), and complementary relations with other tax expenditure schemes and fiscal policies. Led by: selection of the core subject of evaluation by the MOEF; self-evaluation by individual ministries; completeness checks by the Ministry of Economy and Finance and the Korea Institute of Public Finance.]
Source: Original figure for this publication, based on KIPF 2020b. Note: R&D = research and development.
Ex Ante Feasibility Study

Ex ante feasibility studies are required for new RDTI programs that are expected to incur a tax expenditure of ₩30 billion (approximately US$25 million) or more.30 The act articulates the requirements of the feasibility study in detail, including the dimensions of evaluation and submission deadlines. The study covers the validity, appropriateness, expected effects, and potential challenges of the proposed new RDTI program to objectively decide whether it should be introduced.31 Research institutes specializing in policy evaluation carry out these studies; KDI and KIPF have been officially designated as qualified research institutions to conduct ex ante feasibility studies.32 Appendix C provides a detailed timeline of the ex ante feasibility studies. In terms of methodology, the operational guidelines for ex ante feasibility studies of special taxation provide details.33 The three pillars of analysis are (1) conformity with public interest, (2) economic feasibility, and (3) fairness, and both quantitative and qualitative research can be conducted to analyze a special taxation plan along these three dimensions. In addition, a cost-benefit analysis must be performed when assessing economic feasibility. After the analysis in each pillar is complete, the guidelines also require a comprehensive evaluation to be conducted using the Analytic Hierarchy Process (AHP).34 The research institute conducting the study makes suggestions based on evaluation results, including on whether the program should be adopted and how its design can be improved. While the government is not obligated to base its decisions on the evaluation results, the ex ante feasibility study acts as a critical checkpoint that any large-scale special taxation program must pass before it can be implemented.

30 Based on the Restriction of Special Taxation Act, art. 142, para.
5, and the Enforcement Decree of the act, art. 135. 31 The Enforcement Decree of the Restriction of Special Taxation Act (art. 135) provides more details about the required dimensions of evaluation, which include (1) the validity of policies, including the necessity and timeliness of the special taxation, expected effects, potential problems, and methods for support; (2) the impact on various areas of the economy, including employment and investment; and (3) the impact on the redistribution of income in various areas of society, such as families, enterprises, and local communities. 32 The Enforcement Decree of the Restriction of Special Taxation Act (art. 135, para. 2) states that the Minister of Economy and Finance may designate any of the following institutions as an institution to conduct specialized surveys and research: (1) The Korea Institute of Public Finance, (2) the Korea Development Institute, and (3) other institutions that the Minister of Economy and Finance recognizes as having specialized human resources, capacity to conduct surveys and research, and so forth, regarding the assessment of special taxation programs. Through an interview with KIPF conducted on November 18, 2020, the project team learned that the KDI has traditionally been conducting the ex ante feasibility studies relating to special taxation. 33 Articles 24, 25, 26, 27, 28, and 29. 34 The analytical hierarchy process, first developed by Thomas Saaty (1977) in the late 1970s, is a decision-making model for establishing priorities in multicriteria decision-making. It enables decision-makers, including policy makers, to select a solution from alternatives based on a number of evaluation criteria and pairwise comparisons. 
The three central elements of the process are (1) identifying and organizing decision objectives, criteria, constraints, and alternatives into a hierarchy; (2) evaluating pairwise comparisons between the relevant elements at each level of the hierarchy; and (3) synthesizing the results of the pairwise comparisons over all the levels using the solution algorithm (Saaty 1988). The process is one of the most widely used multicriteria decision-making methods for complex scenarios.

Self-Evaluation by Line Ministry

The self-evaluations are conducted by line ministries for programs with the following characteristics:35 (1) programs that are facing the end of their sunset period, (2) programs within two years of implementation, (3) programs that are proposing to expand their scope or coverage, and (4) programs that need to be reviewed based on the annual master plan for special taxation and restrictions.36 According to the enforcement decree, the tax expenditure evaluation reports, which are prepared by individual line ministries to assess the effects of their own RDTI plans, are submitted by April 30 each year.37 This deadline allows the MOEF to review and enhance the quality of self-evaluations in coordination with line ministries over the subsequent five months, until the final versions are produced in September. The reports cover the effects of tax relief and the responsible ministry's suggestion as to whether an RDTI should be retained, abolished, or expanded. Furthermore, the enforcement decree grants central ministries the authority to propose tax expenditure plans for new RDTI programs.
These proposals include their evaluations of the anticipated policy effects of such tax incentives, projections of annual tax expenditures, and relevant statistical data to justify the need for such fiscal incentives in the effective execution of economic and social policies. The proposals must be submitted to the Minister of Economy and Finance by April 30 each year. The self-evaluations, which encompass both the tax expenditure evaluation reports and the tax expenditure proposals, are carried out to ensure the responsible management of RDTI programs by line ministries. The reports and proposals submitted by line ministries to the MOEF are checked for completeness and integrity with respect to all necessary proposal requirements. Given the sheer number of tax expenditure items in place and the expertise required, the MOEF assigns this task to KIPF, which is better placed to manage the process. After the evaluations are submitted to the MOEF and handed over to KIPF, the institute conducts a round of completeness checks by May 31 each year. From June to August, KIPF and line ministries exchange feedback to further improve and complete the submitted reports and proposals. Such M&E results are reflected in the MOEF's Tax Reform Proposal—an annual proposal containing suggested changes to existing tax laws—and Tax Expenditure Budget Report—an annual report containing an analysis of the performance of special taxation programs and their tax expenditures—both of which are submitted to the National Assembly for its deliberation in September. The methodology used for the self-evaluations is less rigorous than the methods used for the ex ante feasibility studies and ex post in-depth evaluations. The evaluations are limited to describing the effects of tax expenditures based on performance indicators and on data readily available inside the responsible ministry.
KIPF's completeness checks are based on a checklist and mainly serve to verify whether and to what extent the evaluation was conducted with sincerity by the relevant line ministry.38

35 Its legal basis includes the Restriction of Special Taxation Act, art. 142, paras. 1, 2, 3, 7, and 8; the Enforcement Decree of the act, art. 135, paras. 1 and 10; and the Enforcement Decree of the act, arts. 13–2 (KIPF 2020b).
36 Based on the Restriction of Special Taxation Act, art. 3, and the Enforcement Decree of the act, art. 135, para. 1.
37 Article 142, para. 3.
38 Article 142, para. 2.

Ex Post In-Depth Evaluation

The primary goal of the ex post in-depth evaluations is to determine whether an RDTI program has achieved its goal and intended economic impact (MOEF 2020). This in turn helps the National Assembly decide whether the RDTI program should be extended, scaled down, expanded, or abolished, thereby promoting the efficient use of public resources and minimizing the risks of waste and of capture by special interests. Figure 3.8 shows how this evaluation plan is structured.

FIGURE 3.8. Framework of the Ex Post In-Depth Evaluation for RDTI Programs
[Figure: starting from an overview of the R&D tax incentive scheme and of issues, the evaluation proceeds along three pillars: analysis of validity (relevance of the government role and of the design and implementation of the scheme); analysis of effectiveness (economic impact on production, investment, employment, income, income redistribution, and public finance); and analysis of areas for improvement (impediments to the intended performance and ways to address them, based on the reviews of validity and effectiveness). These culminate in a comprehensive evaluation and policy recommendations informing the decision to extend the sunset period or repeal the scheme.]
Source: Original figure for this publication, based on KDI 2018a.
Note: R&D = research and development; RDTI = research and development tax incentive.
The ex post in-depth evaluation consists of two categories: the mandatory in-depth evaluation and the optional in-depth evaluation. The former is conducted on large-scale RDTI plans with a tax expenditure of ₩30 billion (approximately US$25 million) or more that will reach the end of their sunset period in that calendar year. The latter is performed on certain RDTI plans prescribed in the act (KIPF 2020b). The evaluations cover a range of topics, including the level of target values for achievement, economic effects, impact on income redistribution, and fiscal implications for the treasury. As noted earlier, these evaluations are required by the law, which also designates qualified research institutions to perform them.39 The Operational Guidelines for Ex Post In-Depth Evaluations specify what an in-depth evaluation should analyze and how.40 The three pillars of analysis are (1) effectiveness, (2) validity, and (3) areas for improvement. Although specific research methods may vary depending on the subject of the evaluation, both quantitative and qualitative research using surveys and data are conducted to analyze a special taxation plan along these three dimensions. The guidelines provide more detailed requirements for evaluation in each of the three pillars. For example, for the analysis of effectiveness, the guidelines require an analysis of changes in the behavior of beneficiaries and of effects on investment, employment, and income redistribution, among many other aspects. The research institute responsible for the evaluation selects the analysis method deemed most appropriate for the particular subject under examination.

39 Restriction of Special Taxation Act, art. 142, para. 4.
40 Restriction of Special Taxation Act, arts. 17, 18, 19, 20, 21, and 22.
Based on the results of the analysis, the research institute makes suggestions, including on whether the plan in question should be extended, abolished, or improved, and if the latter, how it should be improved. The evaluations must be submitted to the National Assembly no later than 120 days before the beginning of each fiscal year, to give the National Assembly enough time for deliberation (KIPF 2020a).41 As with the other M&E mechanisms, it is not mandatory for the government to act upon the suggestions. However, government decisions are generally made in line with the recommendations from the evaluations.

Box 3.2. Impact Evaluation of Innovation Policy in Korea

Impact evaluation of innovation policy is not widely adopted in Korea. Although the ex post in-depth evaluations could be considered a type of impact evaluation, as they attempt to comprehensively assess the impact of a policy using scientific methods, limitations exist. The same limitations generally apply to ex ante feasibility studies as well. First, the research methods used to evaluate impact vary in their degree of rigor. Not all ex post in-depth evaluations use counterfactual scenarios, and therefore causality is not always clearly identified. In fact, the Operational Guidelines for Ex-Post In-Depth Evaluations of Budgetary Programs do not require the use of a counterfactual. Second, ex post in-depth evaluations are rarely conducted for innovation-support programs. Every year, about 10 ex post in-depth evaluations are conducted for budgetary programs spanning all policy areas (Kang et al. 2018). Historically, programs related to welfare, labor, and public health have been the most frequently evaluated. There are discussions about adopting more experimental evaluations. For instance, a Korea Development Institute study (2018) proposed designing social policy pilot projects as randomized controlled trials to evaluate their effectiveness more thoroughly and scientifically.
Source: Original for this publication.

41 Based on the Restriction of Special Taxation Act, art. 142, para. 4.

Concluding Remarks

Over the past few decades, through multiple phases of evolution, learning, and improvement, Korea has made notable achievements in the field of innovation policy, including in monitoring and evaluation. As demonstrated in this chapter, Korea has managed to address several fundamental issues in line with international good practices in the M&E of innovation policy (reviewed in chapter 2), paving the way for increasingly advanced and sophisticated procedures and techniques. Korea's strong legal frameworks for innovation policy have enabled close coordination among line ministries and implementing agencies and contributed to the long-term sustainability of the M&E system. For the M&E of budgetary R&D policies, the country enacted the Act on the Performance Evaluation and Management of National Research and Development Projects and its Enforcement Decree. The M&E of budgetary non-R&D programs is grounded in the National Finance Act and its Enforcement Decree. Based on the acts and enforcement decrees, the MSIT and MOEF develop operational guidelines that provide even more detailed rules and procedures guiding government officials' day-to-day operations. While the acts typically remain consistent, amendments and adjustments to the enforcement decrees and operational guidelines are frequently made to refine the M&E mechanisms. Korea's attention to transparency and accountability, implemented through several practical mechanisms, has led to greater autonomy and responsibility in line ministries. First, most M&E results are required by law to be disseminated through the National Science and Technology Information Service (NTIS), an inter-ministerial knowledge portal for science and technology information.
The NTIS serves as a one-stop platform for M&E results, along with other types of STI information, and is open not only to government officials but also to the public, including researchers, academics, and students. Second, by building in multiple layers of supervision, Korea has been able to hold line ministries accountable. For instance, line ministries' self-evaluations are subject to review (higher-level evaluations) by the MSIT in the case of budgetary R&D policies and by the MOEF in the case of budgetary non-R&D policies. In addition to this double-layered M&E supervisory structure, in-depth evaluations are performed on certain programs and projects that are determined to need closer inspection based on the results of self-evaluations and on the reviews by the ministries. Third, several mechanisms were put in place to overcome the limitations of self-evaluations. For instance, the MSIT acknowledges achievements from R&D programs and projects only if they are explicitly covered in the self-evaluations submitted by line ministries, so the ministries have a strong incentive to pay attention to the quality of their self-evaluations. To enhance the objectivity and independence of self-evaluations, relevant laws and guidelines require the participation of external experts from academia, research institutes, and the private sector in conducting self-evaluations. Korea's innovation policy features mutual trust between the MOEF and MSIT and the evaluated entities, which has led to streamlined M&E requirements and exemptions where needed. This flexibility has encouraged a turn away from short-termism in innovation policy.
This shift toward a long-term perspective has, to an extent, promoted greater acceptance of riskier innovation policies and creative M&E measures such as the Core Program Evaluation Scheme, which was adopted in 2018 to enable swifter theme-based M&E of policies of national priority (Chang 2020). Still, as in other countries, Korea faces its own challenges and limitations in this area. First, both KISTEP and KIPF are funded by the government and hence are not entirely independent from the state. This is not necessarily a drawback since, as noted in the international experiences chapter, different arrangements (fully independent evaluators and in-house evaluators versus hybrid models combining both) have their own pros and cons. Korea has put several measures in place to render evaluations conducted by the research institutes relatively free from the political influence of interest groups, including the government, by strictly mandating procedures for forming evaluation committees and open access to evaluation reports. Nevertheless, this lack of full independence does pose a risk to the objectivity of M&E exercises and results. Second, line ministries tend to overrate the level of achievement when self-evaluating their innovation-support programs, thereby harming the reliability of self-evaluations. For instance, a report by KISTEP (2018) points out that in the case of R&D programs, the average performance assessments from self-evaluations were consistently more positive than those from the higher-level evaluations conducted by the MSIT from 2008 to 2017. Similarly, in the case of self-evaluations of RDTI programs, ministries reported significantly higher performance scores than those from KIPF's independent reviews (Kim 2019). Third, M&E primarily focuses on analyzing innovation inputs—for example, innovation investments—and the scope of evaluations seldom centers on efficiency.
The Korean M&E plans seldom capture the efficiency of these investments and innovation activities, limiting the potential to relate returns to the expenditures that went into generating them. Chang (2020) points out that part of the problem stems from the fact that evaluation plans anchor their unit of analysis to budget allocation, without regard for either the program function or its scope, which typically represent a more insightful unit of analysis for capturing positive externalities in the M&E of innovation policy. This is particularly relevant, as the benefits from innovation policy are realized through knowledge spillovers from R&D and non-R&D investments. Fourth, the value of learning and adaptation based on M&E results needs to be strengthened. In much of policy makers' routine operations, M&E tends to be perceived as a mandatory box-checking exercise and a compliance burden rather than an opportunity for learning and performance improvement (Chang 2020), and the transition away from short-termism still has a long way to go. In addition, the link between M&E results and budget allocation is not entirely clear (Kang et al. 2018). Relatedly, there are debates on whether results obtained from current M&E plans can serve as the basis for decisions about budget adjustments, given their limited evidence base and objectivity (Chang 2020). Fifth, the rigidity of the unit of analysis prevents policy makers from understanding the effectiveness of innovation-support programs more thoroughly. The unit of analysis of the M&E plans is typically fixed to an individual program, project, or institution. Although Korea started grouping programs to evaluate their collective performance in 2018, the country has yet to reap meaningful lessons from the wider scope of the evaluation plan.
This is especially relevant for innovation policy, where complementary factors such as research skills, laboratory infrastructure, and financial capital are all necessary for successful returns on innovation activities. The impacts of policy mixes, as well as more nuanced attribution to individual policy measures, need to be appreciated.

04 Lessons and Takeaways for Developing Countries

Previous chapters have profiled principles of good practice in the M&E of innovation policy and provided an account of Korea's M&E systems. This chapter draws lessons for the present study's client countries—Indonesia, the Philippines, and Vietnam—based on the discussion and documentation of the Korean and international examples and concludes with key takeaways for client countries. The following sections discuss the key lessons in more detail based on the categorical framework proposed in chapter 2: Governance, Data and Methods, and Capacity and Resources.

Governance

Situation in Client Countries

In the three client countries, M&E for most of the programs reviewed is mandated by law. However, reviews of innovation programs revealed a lack of systematic adoption of M&E frameworks. For instance, the Policy Effectiveness Reviews (PERs) for innovation policy, conducted as part of the World Bank's previous advisory engagements with the client countries, revealed that monitoring of inputs and outputs was conducted for a few programs, but practitioners did not capture specific outcomes using standardized methods and indicators. Furthermore, M&E frameworks with measurable indicators were largely incomplete for the sample of innovation policy programs reviewed.
The review of the M&E systems applied to these programs found that a lack of clear definitions of inputs, activities, outputs, impact, and external conditioning factors was likely to constrain policy makers' ability to make timely program adjustments. Even for those programs with an M&E framework in place, program targets and outcomes were often disconnected, showing that the lack of a logical framework or theory of change can significantly constrain implementation effectiveness. Weak mechanisms for learning during or after implementation represent a missed opportunity to introduce systematic course correction. The lack of a systematic effectiveness assessment after the termination of a program was another limitation commonly present in the program practice of the three client countries.

Lessons from Korea

Korea's well-articulated use of mandated M&E frameworks can be instructive for client countries. The legal basis requires the use of M&E frameworks, defines roles and rules explicitly, and delegates authority to promote not only accountability but also autonomy. In the case of budgetary R&D policies, the Performance Evaluation Act and its enforcement decree grant the MSIT authority to oversee the M&E of R&D policies and to designate specific agencies, including KISTEP, to support M&E activities. For the M&E of budgetary non-R&D policies, the National Finance Act and its enforcement decree give the mandate to the MOEF, with support from designated agencies such as KIPF. The laws also require the disclosure of most M&E results to the public. By subjecting M&E results to the scrutiny of government offices and of the public, Korea's M&E system leverages transparency to drive accountability and autonomy. Korea's five-year master plans have been instrumental in ensuring a holistic and long-term approach to the M&E of innovation policy.
As shown in chapter 3 and this chapter, Korean five-year master plans are developed to provide overall guidance on the M&E of innovation policy, and detailed M&E plans are conceived based on these master plans at the ministerial level. This approach ensures that the long-term perspective adopted during policy formation facilitates the development of an M&E system that is thorough and implemented across all levels. Because most M&E data and results are made public through online platforms, policy makers and any interested parties may easily monitor and evaluate the performance of programs (based on long-term goals and indicators established before implementation) without the burden of a formal information request and approval process. Korea has found a balance in dividing the labor between R&D and non-R&D innovation policies between two major ministries—MSIT and MOEF—and this is well aligned with its overall structure of innovation policy making and implementation. As reviewed in chapter 2, advanced innovation systems such as those of Sweden and the UK typically feature an increasingly integrated approach to the M&E of innovation policy, which includes investment in both R&D and non-R&D activities; the integrated, one-stop approach can bring significant advantages in addressing complex, persistent, or “wicked” problems that cannot be understood and addressed in isolation. Although Korea's M&E for innovation policy diverges from that of some other advanced economies in that the Korean implementation divides its operation between R&D and non-R&D policies, the divisional arrangement is in line with its overall innovation system structure and has thus far served its purpose. The takeaway is that, whether using separate M&E mechanisms or an integrated mechanism for R&D and non-R&D policies, developing-country practitioners need to adopt an M&E system that best responds to their local contexts and the overall division of roles in their innovation systems.
If a country already has a dedicated and empowered innovation agency in place, then an integrated approach would be ideal. If, instead, a country faces a deep-rooted division between R&D and non-R&D innovation policies, then the Korean approach would offer more readily adoptable lessons in the near term. Korea has benefited greatly from a strong political drive and major investments in promoting the M&E of innovation policy, coming from the very top-level leadership. The role of the Presidential Advisory Council on Science and Technology (PACST) was crucial in mandating the M&E of innovation policy in practice, and regular high-level meetings presided over by the president provided real coordinating power. Coupled with this political commitment has been sufficient investment in the resources and capacity required, which is discussed in section 4.3. Without strong top-level support and the investments needed, M&E mandates would be unlikely to be implemented in practice.

Data and Methods

Situation in Client Countries

The PERs in the three client countries found that knowledge management and information-sharing systems should be improved for more effective use of M&E in advancing innovation policy. One challenge was the lack of an integrated digital database that can systematically manage the applications to and beneficiaries of innovation programs in Indonesia, the Philippines, and Vietnam. Further, M&E data could not be easily produced even during the review, revealing that the data were neither available under the proper classification nor disaggregated at the level required for basic analysis. In Indonesia, the authors found that M&E frameworks for innovation programs often cannot rely on standard data requirements due to budget constraints.
Lack of cooperation in data sharing across implementing agencies was found to be another barrier to improving innovation policy M&E systems in these three countries. Information sharing is particularly important because data for innovation policy are typically managed by several government entities. In addition to the issue of fragmented data management, challenges related to evaluation itself should also be addressed in the client countries. The diagnostics undertaken reveal that the programs were hardly ever based on an identified market failure in the local context, and the process of introducing new instruments appeared ad hoc in the absence of clear program origins. Weak problem identification can result in several methodological issues, such as the absence of clearly identified, attainable policy objectives and measurable indicators of failure or success, and a lack of economic justification for the proposed instrument, all of which were prevalent in the three countries studied. This implies that there is little space for defining outputs and providing an indication of impact. Further, in monitoring and evaluating their programs, the client countries did not actively seek to utilize the expertise of research institutions or experts specializing in M&E. When it comes to establishing clear program closure procedures, the PERs found that the Philippines and Vietnam did relatively well. In the Philippines, program closure mechanisms were found to be strong, and, by law, most programs collected performance monitoring information from beneficiaries beyond the end of the support program. In Vietnam, most programs had a built-in expiration date and a requirement for evaluating their potential continuation. These practices are in line with internationally recognized best practices and are present in Korea.
Lessons from Korea

In Korea, along with their overall supervision mandates, the MOEF and MSIT are given the authority to make information requests to line ministries, as articulated in the aforementioned laws.42 The laws detail the circumstances under which the MOEF and MSIT can make requests and require line ministries to respond to them. This legal framework is particularly useful for securing information and data for M&E when the evaluating entity and the program-implementing entity are different.

42 Article 8 of the National Finance Act; Article 12 of the Act on the Performance Evaluation and Management of National Research and Development Projects, Etc.

Korea's NTIS provides an excellent example of easing access to information and data for M&E, particularly data for innovation policy, which are often managed by several entities. As discussed in chapter 3, the NTIS serves as a one-stop shop for M&E information and data that is open not only to government officials but also to the public, including researchers, academics, and students. M&E is a data-driven activity with shared responsibility across entities and offices, and its successful implementation carries with it a commitment to transparency and accountability. Therefore, the data on which M&E is carried out should face no obstacles to access and use. Ease of access depends on both regulations and technical matters. Much-needed data may originally be obtained in the form of reports from beneficiaries or as the result of follow-up consultations and surveys. The data from all these procedures and sources must be routinely organized into consistent categories and indicators and cleaned for ease of use in M&E activities.
In Korea, the qualification of program origins, including building the case for a program and processing and analyzing M&E data, is closely supported by specialized research institutions with expertise in M&E. One of the M&E functions during the design stage relates to design choices, such as selecting the most appropriate intervention among several possibilities. This is a crucial step in what is often labeled ex ante evaluation. In a sense, the alternative candidate instruments for the planned intervention constitute counterfactuals for each other—a critical evaluation criterion in itself. In Korea, any large program requires a thorough analysis of the problem and an exercise to decide whether to launch the program based on a cost-benefit analysis. As described in chapter 3 and this chapter, sizable policies require evaluation using the analytic hierarchy process method.43 Such evaluations are done by research institutes, which make recommendations based on evaluation results. The engagement of a specialized research agency can also contribute to adherence to a minimum level of rigor, which applies not only to program origin but also to the analysis of M&E data. Meanwhile, consultation with stakeholders helps define the specific aim of the policy, which is then factored into the theory of change and the items to observe and measure in the M&E process. For some evaluations, Korean laws provide autonomy and flexibility by deliberately not imposing specific evaluation methods. In such cases, only the areas of evaluation are specified, leaving the method of evaluation to the discretion of the entity charged with undertaking it. Given that certain methods of monitoring and evaluating innovation policy often have limitations, forbearing from universally prescribing specific M&E methods can be liberating.
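To make the analytic hierarchy process mentioned above more concrete, the sketch below derives priority weights from a pairwise comparison matrix using the common geometric-mean approximation and checks Saaty's consistency ratio. The three criteria and the comparison values are purely illustrative assumptions for this note, not drawn from any actual Korean evaluation guideline.

```python
import math

def ahp_priorities(matrix):
    """Approximate AHP priority weights from a pairwise comparison
    matrix using the geometric mean of each row (Saaty's row method)."""
    n = len(matrix)
    geo = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo)
    return [g / total for g in geo]

def consistency_ratio(matrix, weights):
    """Saaty's consistency ratio CR = CI / RI, where
    CI = (lambda_max - n) / (n - 1); CR < 0.1 is conventionally
    taken to mean the pairwise judgments are acceptably consistent."""
    n = len(matrix)
    # lambda_max approximated as the mean of (A w)_i / w_i
    aw = [sum(matrix[i][j] * weights[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / weights[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random indices
    return ci / ri

# Hypothetical criteria for screening a support program:
# policy validity, economic impact, income redistribution.
# matrix[i][j] = how much more important criterion i is than j.
A = [
    [1,     3,     5],
    [1 / 3, 1,     3],
    [1 / 5, 1 / 3, 1],
]
w = ahp_priorities(A)
cr = consistency_ratio(A, w)
```

In a full exercise, the same procedure would be repeated to score each candidate program against every criterion, with the criterion weights `w` used to synthesize an overall ranking across the hierarchy.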
Another way to enable flexibility is to allow line ministries to develop program targets and measurable indicators for self-evaluations, as opposed to imposing centrally developed targets and indicators that do not consider the widely varied characteristics and local conditions of innovation-support programs. To ensure the minimum quality of these performance targets and indicators and the rigor of evaluation methods, the MOEF and MSIT supervise the process and provide guidance to line ministries as needed. Clarifying program eligibility for applicants can lead to improved targeting of beneficiaries and, from the M&E perspective, enhance both the quality and accessibility of necessary data. The population of potential beneficiaries will translate into actual applications for support or participation in a program if the framework, rules, and criteria that define the target population are understood by and accessible to its members. The target population is the realm in which the impact, in the short and medium term, is expected to materialize. This is a critical domain in which the M&E program must be able to capture data. Establishing follow-up practices with beneficiaries aids in understanding the long-term impacts of innovation policy. Many of the important impacts of innovation policies occur after the projects or activities conducted under their purview have concluded. Important information will be available only some time after the end of the beneficiaries' participation. Therefore, a key good practice in M&E is a plan and method to follow up with beneficiaries after closure.

43 Article 50 of the Operational Guidelines for Ex Ante Feasibility Studies.
Capacity and Resources

Situation in Client Countries

Prior analytical advice delivered by the World Bank found that client countries did not devote enough resources to developing capacity for the M&E of innovation policy. For example, in the Philippines, more staff training was found to be needed within institutions to enhance staff understanding of M&E and the use of logical frameworks in programs. In Vietnam, limited resources were found to be allocated to regular program monitoring. Further, for annually budgeted programs, uncertainty around budget amounts led to various types of risk that thwarted the effective M&E of policies. In Indonesia, programs were typically well funded, but not enough resources and consideration were given to staffing. Staff members were shared across multiple programs, which resulted in work overload that could compromise the quality of implementation. Staff assignments to programs were made without full consideration of competencies and capabilities, and large variations in staff skillsets compromised the effective execution of programs. Among the various capacities needed for the M&E of innovation policy, the PERs found that coordination capacity should be enhanced in the client countries. The lack of systematic coordination activities, especially at the level of implementation rather than management, resulted in several challenges that cut across the framework's three dimensions. For instance, failure to coordinate closely across institutions hampered even basic data sharing, which in turn can significantly restrain opportunities for learning and course correction. Weak coordination capacity can also lead to a lack of consultation with stakeholders, which is crucial for securing accountability, objectivity, and needed expertise. The capacity to implement good M&E practices requires significant resources, in terms of both funding and skilled personnel.
The perception that M&E is a nonproductive overhead component of policy must be superseded by a culture of professionalism and continuous improvement if the aims of transparency and accountability in government activity are to be met. This requires a commitment of resources commensurate with such a declaration of purpose.

Lessons from Korea

Korea offers useful experience in integrating staff training requirements into the M&E of innovation policy itself. Mandatory and optional training programs specifically designed to strengthen government officials' capabilities for M&E are offered by the supporting agencies, in addition to operational guidelines. For example, KIPF trains government officials on the formulation of M&E measures, such as the methodology for setting performance indicators.44 Eligible policy makers must take a mandatory education program every year to achieve certification, and an optional education program is available for high-ranking officials. Frequent training is important to the continuity of M&E work because government officials in Korea, as in many other countries, periodically rotate between official positions (Jung et al. 2010).

Korea's approach to enhancing coordination capacity among policy makers is firmly based on the legal framework. For example, in the self-evaluation of budgetary non-R&D innovation policies by line ministries, the National Finance Act and its enforcement decree mandate that the MOEF and line ministries coordinate the setting of performance targets and indicators and the refinement of M&E results over a mutually agreed period of time. The laws and operational guidelines also require consultation with stakeholders, including reviews and verification of M&E results by external experts and research institutions specialized in M&E.
Even once the specifics of each policy are acknowledged, and the particular demands they place on appropriate M&E schemes are recognized, there remains much commonality in needs, methods, and relevant skills that benefits from a coordinated effort among all ministries and agencies. Coordination not only achieves efficiencies by avoiding duplication of effort but also supports the development of useful, relevant, and accessible data, since common definitions and data architectures are achievable only in the context of such coordination.

Korea has leveraged another important factor in ensuring coordination and implementation: budgeting. MSIT and MOEF, the executive bodies for the M&E of R&D and non-R&D policies, respectively, have the power to allocate budgets based on M&E results. It is important to note that budget allocation is not based solely on performance evaluation results, because not all programs can be monitored and evaluated on an equal footing; program objectives and circumstances are considered in addition to efficiency and effectiveness. As such, exactly how budget adjustments are executed based on M&E results may be relatively unclear to external observers. However, key informants for this study confirmed that this process has been established in practice and has played a key role in ensuring that M&E results inform budget allocation.

In terms of capacity, Korea's expertise in carrying out the M&E of innovation policy has been supported by research institutes specialized in M&E as well as by academia, making it easier to ensure that the country's leading experts are involved in the process. For the M&E of budgetary R&D policies, KISTEP supports the MSIT; for the M&E of budgetary non-R&D policies, KIPF supports the MOEF. Both KISTEP and KIPF were established and qualified by law as research institutes to conduct official evaluations of government programs.
Researchers at KISTEP and KIPF do not rotate every two to three years as government officials do, enabling them to accumulate know-how in conducting government policy evaluations. The strong networks that KISTEP and KIPF maintain with experts and other research institutions also contribute to the M&E of innovation policy, as external experts such as scholars in academia also participate in evaluations.

44 Interview with KIPF, November 18, 2020.

Conclusion and the Way Forward

This case study demonstrates that good M&E practice is never merely a technical matter; it is also a governance matter, as the requirements and support for M&E follow directly from the governance and accountability basis on which it is built. Korea's current M&E system is the result of decades of development and refinement. M&E programs for innovation policy require constant adjustment and improvement because of the fast-changing, multifaceted nature of innovation activities. Developing an M&E system like Korea's, or adopting certain features of it, may necessitate changes in culture, organizational routines, and individual behavior. In taking steps toward M&E better suited to country-specific needs in innovation policy, prioritization is required and should be driven by each country's governance readiness, capacity and resource levels, and technical capabilities. Developing countries can anticipate and prepare for certain challenges in improving their M&E plans by referring to the Korean experience; the challenges that Korea is currently experiencing could be circumvented in developing countries if countermeasures are implemented early on.
While the case study points to several takeaways that could be adopted in client countries (see table 4.1 for a summary), many more details and nuances are needed to address different practical aspects, which will require specific knowledge-exchange activities to bring the Korean experience into closer alignment with client-country contexts.

TABLE 4.1. Key Takeaways from the Republic of Korea for Developing Countries in Adopting the M&E of Innovation Policies

Governance
• Embrace a holistic and long-term perspective. Key mechanisms include five-year master plans and, more recently, a Core Program Evaluation program that evaluates groups of closely related innovation policies (or policy mixes).
• Clearly define ownership to strike a balance between autonomy and compliance. "Self-evaluation" as the major M&E form serves as a low-cost, self-regulating mechanism, marked by scrutiny through selective in-depth evaluation, links to budget allocation, and open access to information.
• Put in place strong mechanisms for learning during or after implementation to facilitate course correction. In practice, decisions are made in line with the recommendations from evaluations; publicity for evaluation reports and scrutiny from the National Assembly impose pressure to carry out course correction accordingly.
• Actively promote the use of M&E frameworks. Every budgetary program by the government is subject to an M&E plan, and M&E frameworks are widely used throughout the Korean government. Operational guidelines are in place to impose strict eligibility criteria and procedures.

Data and Methods
• Open access to results in a public repository (such as NTIS) fosters transparency, accountability, and citizen answerability.
• NTIS serves as a one-stop shop for M&E information and data that is open not only to government officials but also to the public. Empowered by increasingly advanced IT and sophisticated user demand, NTIS-type platforms can be instrumental in catalyzing the enhancement of governance and capacity, going well beyond being merely a data tool.
• Develop M&E as part of policy design, considering alternative instruments and portfolio relationships. Ex ante feasibility studies can be conducted to evaluate whether the proposed policy intervention is the most appropriate among several possibilities. Meanwhile, program targeting should be enhanced to reach the intended population, an activity that can yield the right data and information for M&E purposes (among other uses).
• Use logical frameworks to ensure that program objectives, expected outputs, and impact are clearly defined and measured. Prior to the fiscal year in which a program is planned to be implemented, the line ministry responsible for the program submits an Annual Performance Plan, which should include the program's strategic goals, performance targets, performance indicators, and targets for each indicator. When the program reaches its termination stage, these goals, indicators, and targets are used to evaluate the program's performance.
• Establish clear program closure procedures and follow up with beneficiaries to understand the long-term impacts of innovation policy. The Korean government follows up with beneficiaries that participated in certain R&D programs five years after program closure to evaluate how R&D outputs have been used (such as in technology transfer and commercialization) and the impact that such outputs have generated.

Capacity and Resources
• Leverage different sources of expertise and stakeholder consultations to secure the required expertise and know-how for cross-cutting innovation evaluations.
The Korean government's M&E activities are supported by research institutes specialized in M&E as well as by academia, ensuring that the country's M&E experts are actively involved in the process.
• Capitalize on M&E expertise and resources to offer capacity-building opportunities for policy practitioners in general. The research institutes supporting the M&E plans of the MOEF and MSIT provide technical expertise and assistance in planning and implementing M&E. They also hold regular training sessions for policy makers. For example, KIPF offers several courses for policy makers (both working-level officers and high-ranking officials) so they can better understand the M&E system and prepare and implement their self-evaluations in adherence to the operational guidelines.
• Establish mechanisms for close coordination among ministries and agencies. The Korean acts that stipulate the M&E schemes provide a firm legal basis for the MOEF and MSIT to act as powerful coordinators, and their enforcement decrees prescribe the ministries' authorities in detail. The MOEF and MSIT publish and distribute operational guidelines that detail the procedures, timelines, and requirements of M&E so that line ministries can plan their evaluations accordingly. In addition, the MOEF and MSIT interact with line ministries frequently to ensure the guidelines are being followed.

Source: Original table for this publication.
Note: IT = information technology; KIPF = Korea Institute of Public Finance; M&E = monitoring and evaluation; MOEF = Ministry of Economy and Finance of the Republic of Korea; MSIT = Ministry of Science and ICT [information and communication technology] of the Republic of Korea; NTIS = National Science and Technology Information Service; R&D = research and development.
References

Aridi, Anwar, and Natasha Kapil. 2019. "Innovation Agencies: Cases from Developing Economies." Washington, DC: World Bank Group.

Bae, Junghoe [배정회], Sunyang Chung [정선양], and Jieun Seong [성지은]. 2014. "The Evolution of National R&D Performance Evaluation System in Korea during the Period of 1999–2013 [한국의 국가연구개발 성과평가 (1999~2013) 전개와 특징]." Journal of Technology Innovation [기술혁신연구] 22 (4): 165–98.

Bukstein, Daniel, Elisa Hernández, Lucía Monteiro, Martín Peralta, Clara Reyes, and Ximena Usher Güimil. 2020. "Evaluación de los programas de innovación empresarial de ANII, 2009–2018." Agencia Nacional de Investigación e Innovación, Montevideo.

Chang, Woohyun [장우현]. 2020. "Status of Financial Performance Management and Directions for Improvement [재정성과관리의 현황과 개선방향]." Finance Forum [재정포럼], Korea Institute of Public Finance, April.

Christensen, Tom, Per Laegreid, and Lois Recascino Wise. 2003. "Evaluating Public Management Reforms in Central Government: Norway, Sweden and the United States of America." In Evaluation in Public-Sector Reform: Concepts and Practice in International Perspective, edited by Hellmut Wollmann, 56–61. Northampton, MA: E. Elgar Publishing.

Cirera, Xavier, and William F. Maloney. 2017. The Innovation Paradox: Developing-Country Capabilities and the Unrealized Promise of Technological Catch-Up. Washington, DC: World Bank Group.

Cirera, Xavier, Jaime Frías, Justin Hill, and Yanchao Li. 2020. A Practitioner's Guide to Innovation Policy: Instruments to Build Firm Capabilities and Accelerate Technological Catch-Up in Developing Countries. Washington, DC: World Bank.

Edler, Jakob, Martin Berger, Michael Dinges, and Abdullah Gök. 2012. "The Practice of Evaluation in Innovation Policy in Europe." Research Evaluation 21 (3): 167–82.

Edler, Jakob, Paul Cunningham, Abdullah Gök, and Philip Shapira, eds. 2016. Handbook of Innovation Policy Impact. Northampton, MA: E. Elgar Publishing.

European Commission. 2012. "Evaluation of Innovation Activities: Guidance on Methods and Practices." Directorate for Regional Policy.

Frias, Jaime, and Heejin Lee. 2020. "What Can Practitioners in Developing Countries Learn from Korean Innovation Policy and Its Role in Promoting Innovation and Technological Learning?" Unpublished manuscript.

Frias, Jaime, Yanchao Li, and Kyeyoung Shin. 2020. "Digital Knowledge Management of R&D Policy and Information: The Case of South Korea's National Science and Technology Information Service." Unpublished manuscript.

Gertler, Paul J., Sebastian Martinez, Patrick Premand, Laura B. Rawlings, and Christel M. J. Vermeersch. 2016. Impact Evaluation in Practice. Washington, DC: World Bank.

Glennerster, Rachel, and Kudzai Takavarasha. 2013. Running Randomized Evaluations: A Practical Guide. Princeton, NJ: Princeton University Press.

Görgens, Marelize, and Jody Zall Kusek. 2009. Making Monitoring and Evaluation Systems Work: A Capacity Development Toolkit. Washington, DC: World Bank.

Government of the Republic of Korea. 2018. "Plans for 'Core Program Evaluation' to Support Innovation in Government Finance [재정혁신을 뒷받침하는 80대 핵심사업 평가 추진계획]."

Gugerty, Mary Kay, and Dean Karlan. 2018. The Goldilocks Challenge: Right-Fit Evidence for the Social Sector. Oxford, UK: Oxford University Press.

Industry Innovation and Science Australia. 2016. "Performance Review of the Australian Innovation, Science and Research System 2016." Department of Industry, Science and Resources, Canberra.

Innovate UK. 2018. "Evaluation Framework: How We Assess Our Impact on Business and the Economy." UK Research and Innovation, London.

Jung, Sung-soo, Hoon-ho Kim, Jae-kum Kim, and Se-hee Oh. 2010. "The Influences of the Bureaucrats' Personnel Changes on Educational Policy Implementation." Journal of Educational Administration 28 (4): 381–404.

Kang, Heewoo, Hanjun Park, Namho Kwon, and Youngmin Oh. 2018. "A Study on Usage of Performance Information under Performance Management System of Budgetary Programs in Korea [재정성과평가제도 환류방안에 관한 연구]." Korea Institute of Public Finance.

Kaufmann, Jorge, Mario Sanginés, and Mauricio Garcia Moreno, eds. 2015. Building Effective Governments: Achievements and Challenges for Results-Based Public Administration in Latin America and the Caribbean. Washington, DC: Inter-American Development Bank (IDB).

KDI (Korea Development Institute). 2018a. "2018 In-depth Evaluation of Special Taxation: Tax Reduction for Small and Medium Start-up Enterprises, etc. [2018년 조세특례 심층평가 창업중소기업 등에 대한 세액감면]."

KDI (Korea Development Institute). 2018b. "Evaluating the Effectiveness of Social Policies through a Policy Experiment System [사회정책 효과성 평가를 위한 정책실험 도입방안 연구]."

Kim, Haksoo. 2017. "Rationalizing the Tax Expenditure Performance Management System [조세지출 성과관리제도 합리화 방안]." Monthly Finance Forum.

Kim, Joohee [김주희]. 2019. "Operational Status and Issues of the Special Taxation Performance Evaluation System [조세특례 성과평가제도 운영현황 및 문제점]." The Audit and Inspection Research Institute, Board of Audit and Inspection of Korea.

KIPF (Korea Institute of Public Finance). 2017. "2017 Special Taxation Ex-Ante Feasibility Study (I): Tax Credits for SMEs' Expenses Incurred for Applying for or Registering Patents [2017 조세특례 예비타당성평가(I) – 중소기업이 지출한 특허비용 세액공제]."

KIPF (Korea Institute of Public Finance). 2018. "Annual Guidelines for Self-evaluation of Budgetary Programs, 2005–2018 [재정사업 자율평가 연도별 지침 (2005–2018)]."

KIPF (Korea Institute of Public Finance). 2020a. "Performance Management System for Public Finance [재정부문 성과관리체계]."

KIPF (Korea Institute of Public Finance). 2020b. "Tax Expenditure Performance Management System [조세지출 성과관리제도]."

KISTEP (Korea Institute of S&T Evaluation and Planning). 2018. "Analysis of the Performance of the Evaluation System for National Research and Development Programs and Research on Ways to Advance the Evaluation System [국가연구개발사업 평가제도의 성과분석 및 고도화 방안 연구]."

Korean Law Information Center. 2020. Act on the Performance Evaluation and Management of National Research and Development Projects, Etc., Act No. 12871 (Dec. 30, 2014), Partial Amendment (S. Kor.).

Mackay, Keith. 2004. "Two Generations of Performance Evaluation and Management System in Australia." ECD Working Paper Series No. 11, World Bank, Operations Evaluation Department, Washington, DC.

MOEF (Ministry of Economy and Finance of the Republic of Korea). 2019. "2019 Master Plan for Tax Expenditures [2019년도 조세지출 기본계획]."

MOEF (Ministry of Economy and Finance of the Republic of Korea). 2020. "2020 Master Plan for Tax Expenditures [2020년도 조세지출 기본계획]."

MOEF (Ministry of Economy and Finance of the Republic of Korea). 2021. "2021 Master Plan for Tax Expenditures [2021년도 조세지출 기본계획]."

Moynihan, Donald, and Ivor Beazley. 2016. Toward Next-Generation Performance Budgeting: Lessons from the Experiences of Seven Reforming Countries. Washington, DC: World Bank.

MSIP (Ministry of Science, ICT and Future Planning of the Republic of Korea). 2015. "The 3rd 5-Year Master Plan for Performance Evaluation of R&D Programs (Proposal) [제3차 국가연구개발 성과평가 기본계획(안) (2016–2020)]."

MSIT (Ministry of Science and ICT of the Republic of Korea). 2018. "Guidelines for Self-Evaluation of National R&D Programs [국가연구개발사업 자체평가 지침]."

MSIT (Ministry of Science and ICT of the Republic of Korea). 2019. "2020 Implementation Plan for Performance Evaluation for National R&D [2020년 국가연구개발 성과평가 실시계획(안)]."

MSIT (Ministry of Science and ICT of the Republic of Korea). 2020. "The 4th 5-Year Master Plan for Performance Evaluation of R&D Programs (Proposal) [제4차 국가연구개발 성과평가 기본계획 (2021–2025) (안)]."

MSIT (Ministry of Science and ICT of the Republic of Korea) and KISTEP (Korea Institute of S&T Evaluation and Planning). 2019. "National Research and Development Follow-up Evaluation Report [2018년도 국가연구개발사업 상위평가 보고서 [추적평가]]."

MSIT (Ministry of Science and ICT of the Republic of Korea) and KISTEP (Korea Institute of S&T Evaluation and Planning). 2021. "National Research and Development Follow-up Evaluation Report [2020년도 국가연구개발사업 상위평가 보고서 [추적평가]]."

NABO (National Assembly Budget Office). 2014. "Understanding the National Finance Act [국가재정법 이해와 실제]."

NABO (National Assembly Budget Office). 2017. "2017 Special Taxation: Research on the System and Explanation [2017 조세특례: 제도연구와 해설]."

NABO (National Assembly Budget Office). 2020. "An Evaluation of the Performance Management System for Fiscal Activities [재정활동의 성과관리체계 평가]."

NARS (National Assembly Research Service). 2018. "Impact of the Special Taxation Performance Evaluation System on Lawmaking [조세특례 성과평가 제도의 입법영향분석]."

Noh, Meansun, and Samyoul Lee. 2014. "Improving Tax Support Systems on R&D." Innovation Studies 9 (2): 49–76.

Nordesjö, Kettil. 2019. "Made in Sweden: The Translation of a European Evaluation Approach." Evaluation 25 (2): 189–206.

OECD (Organisation for Economic Co-operation and Development). 2017. "Governing Better through Evidence-Informed Policy Making." Conference summary.

OECD (Organisation for Economic Co-operation and Development). 2022. "Main Science and Technology Indicators: Highlights." https://www.oecd.org/sti/msti-highlights-march-2022.pdf.

Park, Nowook [박노욱]. 2007. "Status and Evaluation of Korea's Financial Performance Management System [우리나라 재정 성과관리제도의 현황과 평가]." Finance Forum [재정포럼], Korea Institute of Public Finance, September.

Restriction of Special Taxation Act. Wholly Amended by Act No. 5584, Dec. 28, 1998 (S. Kor.). Korean Law Information Center, 2020.

Robinson, Marc, and Jim Brumby. 2005. "Does Performance Budgeting Work? An Analytical Review of the Empirical Literature." IMF Working Paper No. 05/210, International Monetary Fund, Washington, DC.

Saaty, Thomas L. 1977. "A Scaling Method for Priorities in a Hierarchical Structure." Journal of Mathematical Psychology 15 (3): 234–81.

Saaty, Thomas L. 1988. "What Is the Analytic Hierarchy Process?" In Mathematical Models for Decision Support, edited by G. Mitra, H. J. Greenberg, F. A. Lootsma, M. J. Rijkaert, and H. J. Zimmermann. NATO ASI Series (Series F: Computer and Systems Sciences), vol. 48. Berlin: Springer.

STEPI (Science & Technology Policy Institute). 2003. "The Analysis of the System and Structure of the Korean Government R&D Programs and Policy Recommendations [정부연구개발사업의 체계·구조분석 및 정책제언]."

STEPI (Science & Technology Policy Institute). 2015. "Qualitative Evaluation of National R&D Programs: Current State and Future Direction [국가연구개발 정성평가 현황과 발전방향]."

UK (United Kingdom), HM Treasury. 2020. The Magenta Book: Central Government Guidance on Evaluation. London.

World Bank. 2015. "Restructuring Paper on a Proposed Project Restructuring of the Institutions Building Technical Assistance Project (IBTAL) to the Republic of Uruguay." Report No. RES18226, World Bank, Washington, DC.

Yoon, Chong-Min. 2014. "A Study on the Regulations of National R&D Performance Management System." Journal of Korea Technology Innovation Society 17 (3): 519–39.

Zaltsman, Ariel. 2006. "Evaluation Capacity Development: Experience with Institutionalizing Monitoring and Evaluation Systems in Five Latin American Countries: Argentina, Chile, Colombia, Costa Rica and Uruguay." ECD Working Paper Series No. 16, World Bank, Independent Evaluation Group, Washington, DC.
Appendices

Appendix A. Timeline of preparation and submission of annual performance plans and performance reports

Year y–1
• April: Guidelines for the Performance Plan are delivered to line ministries.
• May 30: Performance Plans are submitted to the Ministry of Economy and Finance (MOEF).
• June–July: The MOEF conducts prereviews and requests improvements.
• By 120 days prior to fiscal year y: The MOEF submits the Performance Plans to the National Assembly as attachments to the Budget Bill.
• By December 31: Performance Plans are adjusted based on the MOEF's requests and the results of deliberation by the National Assembly.

Year y
• January–December: Execution of the budget.

Year y+1
• January: Guidelines for the Performance Report are delivered to line ministries.
• By February 28: Performance Reports are submitted to the MOEF.
• April 10: Performance Reports are submitted to the Board of Audit and Inspection.
• By May 20: The Board of Audit and Inspection inspects the Performance Reports.
• By May 31: Performance Reports are submitted to the National Assembly as part of the report of settlement of accounts.

Appendix B. Timeline of preparation and submission of line ministries' self-evaluation of budgetary programs, 2017

Preparation
• December 2016: The Ministry of Economy and Finance (MOEF) finalizes its guidelines for line ministries' self-evaluations and holds information sessions for the line ministry officials in charge of conducting self-evaluations.

Self-evaluation
• January 2017: Line ministries establish their own self-evaluation plans.
• January–April: Line ministries conduct self-evaluations of their own budgetary programs.
• April 30: Line ministries submit self-evaluation results to the MOEF.
Release of evaluation results
• May 1–15: The MOEF corrects errors in self-evaluations.
• May 15: The MOEF sends self-evaluation results to the Budget Office.
• May 20: Self-evaluation results are made public.

Source: Original table for this report, based on KIPF 2018, 1478.
Note: This table provides 2017 as an example; specific dates are subject to change.

Appendix C. Timeline of ex ante feasibility studies and ex post in-depth evaluations

• December: The Ministry of Economy and Finance (MOEF) selects research and development (R&D) tax incentive plans subject to evaluation, with a review by the Special Taxation Performance Evaluation Advisory Council.
• January: Research proposals are submitted and researchers are selected. Final agreements are delivered to the Special Taxation Evaluation Team of the MOEF.
• February–March: An inaugural meeting is held to discuss and agree on planning, in coordination with the MOEF (Tax Relief Division of the Tax and Customs Office). Meetings with the National Tax Service are held to request relevant materials and data, with the MOEF playing the role of coordinator.
• April–March: The mid-term report meeting is held. The report, presentation materials, and list of participants are sent to the MOEF.
• June: The final report (draft) meeting is held. The final report, presentation materials, and list of participants are sent to the MOEF.
• July–August: The report is edited to reflect feedback collected from the final report meeting.
• Early September: The final report is submitted to the Tax Relief Division of the MOEF. Ex post in-depth evaluations and ex ante feasibility studies are submitted to the National Assembly no later than 120 days prior to the beginning of the upcoming fiscal year.

Source: Original table for this report, based on KIPF 2020a.
Appendix D. Timeline of self-evaluation of research and development tax incentives (RDTI) by line ministries

• By March 31: Each year, the Ministry of Economy and Finance (MOEF) develops a Master Plan for Special Taxation and Restrictions (based on the Restriction of Special Taxation Act, art. 142, para. 1), which lists the research and development (R&D) tax incentives subject to self-evaluation, and notifies the responsible ministries.
• By April 30: Individual ministries submit their Tax Expenditure Evaluation Reports and Tax Expenditure Proposals to the MOEF. The reports are required, while the proposals are optional. The MOEF then sends the collected reports and proposals to the Korea Institute of Public Finance (KIPF).
• By May 31: KIPF, legally designated as a qualified research institution, conducts completeness checks.
• June–August: KIPF and the individual ministries improve and complete the Tax Expenditure Evaluation Reports and Tax Expenditure Proposals by exchanging feedback.
• September: Final reports and proposals are submitted to the MOEF.

Source: Original table for this report, based on KIPF 2020b.

Seoul Center for Finance and Innovation