HNP Discussion Paper 59885

DEVELOPING STRATEGIES FOR IMPROVING HEALTH CARE DELIVERY
Guide to Concepts, Determinants, Measurement, and Intervention Design

Elizabeth H. Bradley, PhD
Sarah Pallas, MPhil
Chhitij Bashyal, BA
Peter Berman, PhD
Leslie Curry, PhD

June 2010

Health, Nutrition and Population (HNP) Discussion Paper

This series is produced by the Health, Nutrition, and Population Family (HNP) of the World Bank's Human Development Network (HDN). The papers in this series aim to provide a vehicle for publishing preliminary and unpolished results on HNP topics to encourage discussion and debate. The findings, interpretations, and conclusions expressed in this paper are entirely those of the author(s) and should not be attributed in any manner to the World Bank, to its affiliated organizations, or to members of its Board of Executive Directors or the countries they represent. Citation and the use of material presented in this series should take into account this provisional character.

Enquiries about the series and submissions should be made directly to the Editor, Homira Nassery (hnassery@worldbank.org). Submissions undergo informal peer review by selected internal reviewers and have to be cleared by the TTL's Sector Manager. The sponsoring department and author(s) bear full responsibility for the quality of the technical contents and presentation of material in the series. Since the material will be published as presented, authors should submit an electronic copy in the predefined template (available at www.worldbank.org/hnppublications on the Guide for Authors page). Drafts that do not meet minimum presentational standards may be returned to authors for more work before being accepted.
For information regarding the HNP Discussion Paper Series, please contact Homira Nassery at hnassery@worldbank.org, or 202-522-3234 (fax).

© 2011 The International Bank for Reconstruction and Development / The World Bank
1818 H Street, NW
Washington, DC 20433
All rights reserved.

Developing Strategies for Improving Health Care Delivery: A User's Guide to Concepts, Determinants, Measurement, and Intervention Design

Elizabeth H. Bradley,a Sarah Pallas,a Chhitij Bashyal,a Leslie Curry,a Peter Bermanb

a Yale Global Health Leadership Institute, Yale School of Public Health, New Haven, USA
b Health, Nutrition, and Population Unit, Human Development Network, World Bank, Washington, DC, USA

Paper prepared for the World Bank, Washington, DC, USA

Abstract: This report is a user's guide for defining, measuring, and improving the performance of health service delivery organizations. We define six core performance domains (quality, efficiency, utilization, access, learning, and sustainability) and provide a compendium of metrics that have been used to measure organizational performance in each of these six domains. The compendium, which includes 116 distinct categories of metrics, is based on a detailed literature review of peer-reviewed empirical studies of health care organizational performance in World Bank client countries. We include a bibliography of studies that have used these measures. Based on our reading of the literature, we define seven major strategy areas potentially useful for improving performance among health care organizations: 1) standards and guidelines, 2) organizational design, 3) education and training, 4) process improvement and technology and tool development, 5) incentives, 6) organizational culture, and 7) leadership and management.
We provide illustrations of facility-level interventions within each of the strategy areas and highlight the conditions under which certain strategies may be more effective than others. We propose that the choice of strategy targeted at the organizational level to improve performance should be informed by the identified root causes of the problem, the implementation capabilities of the organization, and the environmental conditions the organization faces. Measuring and improving organizational performance is complex because organizations are diverse and dynamic. Users of this guide should take away a toolkit of concepts and methods that can help them identify which questions to ask, and how to answer them, in the context of defining, measuring, and improving the performance of health service delivery organizations. Having this broad set of tools with which to understand and enhance organizational performance can contribute to improving health service delivery and, ultimately, health outcomes.

Keywords: health care delivery, user guide, quality improvement, organizational performance, strategic development

Disclaimer: The findings, interpretations and conclusions expressed in the paper are entirely those of the authors, and do not represent the views of the World Bank, its Executive Directors, or the countries they represent.

Correspondence Details: Peter Berman, 1818 H St. NW, Washington, DC 20433, USA; tel: 202-458-2676; fax: 202-522-3234; email: pberman@worldbank.org; website: www.worldbank.org/hnp

Table of Contents

FOREWORD ......... vii
  HOW TO USE THIS GUIDE ......... vii
ACKNOWLEDGEMENTS ......... ix
I. MEASURING ORGANIZATIONAL PERFORMANCE IN HEALTH CARE DELIVERY: SIX OUTCOMES ......... 1
  Quality ......... 3
  Efficiency ......... 3
  Utilization ......... 4
  Access ......... 4
  Learning ......... 5
  Sustainability ......... 5
  Empirical Measures of Performance Domains in World Bank Client Countries ......... 5
II. DETERMINANTS OF ORGANIZATIONAL PERFORMANCE: THEORETICAL FRAMEWORKS ......... 7
  II.A. BRIEF HISTORY OF ORGANIZATIONAL BEHAVIOR AND ORGANIZATIONAL THEORY ......... 7
  II.B. COMPARING OB/OT WITH ECONOMICS, PSYCHOLOGY, AND SOCIOLOGY ......... 10
    Economics ......... 10
    Psychology ......... 11
    Sociology ......... 12
  II.C. CONCEPTUAL FRAMEWORKS OF ORGANIZATIONAL PERFORMANCE WITHIN HEALTH SYSTEMS ......... 14
  II.D. DETERMINANTS OF HEALTH SERVICE DELIVERY ORGANIZATION PERFORMANCE ......... 17
  II.E. CREATING ENABLING ENVIRONMENTAL CONDITIONS ......... 18
III. STRATEGIES FOR IMPROVING ORGANIZATIONAL PERFORMANCE: A CLASSIFICATION SYSTEM ......... 19
IV. IDENTIFYING AREAS FOR PERFORMANCE IMPROVEMENT PROGRAMS: METHODS ......... 24
  IV.A. SELECTING MEASURES FOR ASSESSING ORGANIZATIONAL PERFORMANCE ......... 24
    Principle 1: Include Performance Metrics from Each Intermediate Outcome Domain ......... 24
    Principle 2: Use Existing Data Sources Where Possible ......... 25
    Principle 3: Test Reliability and Validity of Metrics in the Context of Interest ......... 25
    Principle 4: Weigh Costs and Benefits of Internal and External Data Collection ......... 25
    Principle 5: Engage Stakeholders in Assessment Process ......... 26
    Principle 6: Estimate Resources Required for Data Collection ......... 26
    Principle 7: Align Data Collection Methods to Fit with Domain ......... 26
  IV.B. IDENTIFYING PERFORMANCE GAPS USING DIAGNOSTIC ASSESSMENT RESULTS ......... 27
    Within-Country Comparison ......... 27
    Cross-Country Comparison ......... 27
    Comparison with Domestic or International Standards ......... 28
  IV.C. ASSESSING ROOT CAUSES OF PERFORMANCE GAPS ......... 28
    Multidisciplinary Teams ......... 28
    Qualitative and Mixed Methods ......... 29
    Root Cause Analysis ......... 29
  IV.D. SELECTING STRATEGIES ......... 29
    Assessing Environmental Conditions ......... 30
    Assessing Implementation Capability ......... 30
    Assessing Extent of Strategies Already in Progress ......... 31
    Identifying Positive Deviants and Proven Strategies ......... 34
    Criteria for Selecting Strategies ......... 34
  IV.E. MONITORING PROGRESS IN PERFORMANCE IMPROVEMENT INTERVENTIONS ......... 35
V. CONCLUSION: USING THEORY TO INFORM PRACTICE ......... 37
APPENDIX 1: EMPIRICAL LITERATURE REVIEW ......... 39
  METHODOLOGY ......... 39
  RESULTS ......... 40
    Studies by Performance Intermediate Outcome Domain and Dimension ......... 41
    Studies by Measurement Method ......... 42
    Studies by Geographic Region of World Bank Client Countries ......... 43
    Studies by Type of Health Service, Health Facility Unit, and Cross-Cutting Themes ......... 43
    Instructions for Looking Up Article References ......... 44
APPENDIX 2: ARTICLE REFERENCES BY DOMAIN, DIMENSION & SUB-DIMENSION OF PERFORMANCE ......... 45
APPENDIX 3: ARTICLE REFERENCES BY MEASUREMENT METHOD ......... 51
APPENDIX 4: ARTICLE REFERENCES BY WORLD BANK GEOGRAPHIC REGION ......... 52
APPENDIX 5: ARTICLE REFERENCES BY TYPE OF HEALTH SERVICE ......... 53
APPENDIX 6: ARTICLE REFERENCES BY HEALTH FACILITY DEPARTMENT / UNIT ......... 54
APPENDIX 7: ARTICLE REFERENCES BY CROSS-CUTTING THEME ......... 55
APPENDIX 8: BIBLIOGRAPHIC INFORMATION FOR ARTICLE REFERENCES ......... 56
APPENDIX 9: REFERENCES FOR ASSESSING METRIC RELIABILITY AND VALIDITY ......... 73
APPENDIX 10: REFERENCES FOR USE OF QUALITATIVE AND MIXED METHODS ......... 74
WORKS CITED ......... 75

FOREWORD

This report is a guide to defining, measuring, and improving the performance of health service delivery organizations.
Its scope is limited to frontline health service delivery organizations that interface directly with patients, such as hospitals, clinics, and pharmacies. References in this report to organizations and organization-level interventions should therefore be interpreted in terms of these frontline health service delivery facilities. The objective of this report is to identify determinants of organizational performance in health services and to provide examples of interventions that can be undertaken at the organization level to improve performance. These organizations are part of broader health systems at the sub-national and national levels and are both influenced by and reciprocally influence these broader health systems. However, different organizations within the same health system can vary substantially in their performance. Although this report focuses on organization-level performance factors, it also comments on ways in which an organization's broader environment can enable or inhibit performance, and on the contextual conditions under which performance improvement strategies will be most effective.

The report is organized as follows. In Section I, we propose a taxonomy of six intermediate outcomes that constitute domains of organizational performance, sub-classified by dimensions within each domain. We also provide examples of measures within each domain. In Section II, we review theoretical frameworks and models that provide insight into the determinants of organizational performance, with attention to alternative disciplinary perspectives. In Section III, we propose a classification system of strategies shown to be effective in improving the performance of health care organizations, and suggest conditions under which various strategies may be more or less effective.
In Section IV, we provide guidance on performance assessment methods, including selecting which metrics to include in a diagnostic assessment and how to use assessment results to identify performance gaps. We also discuss common methods for selecting performance improvement strategies and evaluating the impact of interventions. We conclude in Section V with a brief discussion of principles for using the theoretical material to inform practice. The Appendix includes results from a systematic review of more than 2,000 academic articles regarding metrics for the six organizational performance domains, as applied in the World Bank's client countries.

HOW TO USE THIS GUIDE

This guide provides an orientation to the theoretical and empirical literature on organizational performance and recommends principles and methods for applying this literature in the field. It can be used to inform and support training, client country needs assessment, project planning, and project monitoring. Table 1 offers several examples of how the guide could be used to answer questions about organizational performance; these examples are illustrative of the guide's potential but not exhaustive.

Table 1: Examples of How to Use this Guide to Define, Measure, and Improve Performance

Illustrative Question: What elements should I include in my assessment of health care facility performance in the country?
How to Use this Guide: See Section I for a taxonomy of performance domains. See Section IV.A for principles on selecting measures to include in an assessment. See Appendices 1-8 for examples of performance measures used in empirical studies from World Bank client countries.

Illustrative Question: What standard should I use for judging how well health care facilities in the country are performing?
How to Use this Guide: See Section IV.B for guidance on several comparison methods for identifying performance gaps.

Illustrative Question: What metrics have been used to measure the quality of health care facility management?
How to Use this Guide: See Appendix 2 to look up "Quality" under Domain and "Managerial quality" under Dimension for a list of empirical studies with relevant metrics. See Appendix 8 for bibliographic references. Read the original studies for details on measurement methods and metrics.

Illustrative Question: What factors might be causing low performance in health care facilities?
How to Use this Guide: See Section II for summaries of different disciplinary approaches to explaining organizational performance.

Illustrative Question: What strategies might be successful in addressing nurses' lack of motivation in government-run health care facilities?
How to Use this Guide: See Section III for summary tables matching strategies to root causes and conditions for effectiveness. See Sections IV.C and IV.D for recommended methods to determine root causes and select suitable strategies.

Illustrative Question: How can I know if the performance improvement strategy we are implementing is working?
How to Use this Guide: See Section IV.E for suggestions on study designs, data infrastructure, and other prerequisites for effective monitoring of implementation progress and impact.

This guide does not provide a universal instrument for performance measurement; instead, it presents a process that can be applied in diverse country contexts to tailor performance assessment and interventions to local conditions. The right assessment tool and improvement strategy will depend on the organization's local context. This guide should be used to help health sector project managers and decision makers systematically consider the different explanations for health service organizations' performance and the different options for intervention.
ACKNOWLEDGEMENTS

The authors would like to acknowledge helpful comments received from the participants at the Expert Consultation on Organization and Service Delivery in Health Care in Low Income Settings in Berkeley, California, on April 16-17, 2010, and especially the helpful inputs from Professor Stephen Shortell, Professor of Health Policy and Management and Dean at the University of California at Berkeley School of Public Health. As usual, we remain responsible for any errors.

The authors are grateful to the World Bank for publishing this report as an HNP Discussion Paper.

I. MEASURING ORGANIZATIONAL PERFORMANCE IN HEALTH CARE DELIVERY: SIX OUTCOMES

We propose six intermediate outcomes at the organizational level that contribute to the final outcomes of improved health status and risk protection at the health system level. These six intermediate outcomes are quality, efficiency, utilization, access, learning, and sustainability. A taxonomy of intermediate outcomes, or organizational performance domains, further identifies key dimensions of each domain and provides examples of measures within each domain (Table 2). Four of the six intermediate outcomes (quality, efficiency, utilization, and access) are consistent with previous health system frameworks (IOM 2001; Roberts et al. 2004; De Savigny and Adam 2009). Learning and sustainability have been added based on evidence from the health services literature that they contribute to the desired outcomes of improved health status and risk protection (for example, Tucker and Edmondson 2003; Pluye et al. 2004; Gruen et al. 2008). Organizational learning is necessary to keep pace with evolving disease threats and changing environmental conditions. Because improved health status is not a static outcome, organizations must be able to acquire and utilize new knowledge to achieve this goal.
Sustainability, long a guiding principle for development assistance, contributes to improved health status and risk protection by ensuring that needed health services are predictably accessible. The epidemiological transition from acute to chronic disease makes sustainability in health services even more important for continuity of care and effective disease management over time.

Table 2: A Taxonomy of Organizational Performance Domains, Dimensions, and Illustrative Measures

QUALITY
Dimensions: Clinical quality; Management quality; Patient experience
Examples of measures: Adherence to clinical guidelines; Avoidance of medical errors; Availability of medical supplies; Functional medical records system; Patient satisfaction

EFFICIENCY
Dimensions: Cost-to-service ratios; Staff-to-service ratios; Patient or procedure volume
Examples of measures: Nurses or health workers per bed; Inpatient or outpatient visits per day, per bed, or per health worker

UTILIZATION
Dimensions: Patient or procedure volume relative to capacity; Patient or procedure volume relative to population health characteristics
Examples of measures: Percent occupancy; Outpatient visits per provider; Percentage of pregnant women receiving antenatal care

ACCESS
Dimensions: Physical access; Financial access; Linguistic access; Information access; Service availability / allocation; Non-discriminatory service provision (equitable treatment regardless of age, gender, race, ethnicity, religion, class, etc.)
Examples of measures: Geographic distance to facility; Availability of transport to facility; Hours of operation of facility; Absenteeism of health care workers from facility; Affordability of services; Availability of culturally and linguistically appropriate services

LEARNING
Dimensions: Data audit and feedback processes; Innovation adoption; Training/continuing education for healthcare workforce
Examples of measures: Use of balanced scorecard for organizational performance; Presence of patient suggestion box; System exists for nurses to report errors to hospital management; Quality improvement methods used

SUSTAINABILITY
Dimensions: Political support; Community and patient support; Financial support; Human resource supply; Staff commitment; Strategic planning
Examples of measures: Involvement of community leaders in facility planning and monitoring; Use of strategic management process to promote organizational fit with environmental conditions; Timely, useable, and monitored data on facility financial status; Robust connection with health workforce educational pipeline

Together, these six intermediate outcomes offer a model for what a high performing health service organization should deliver. High performing organizations should deliver high quality, efficient, accessible, and utilized services. Furthermore, high performing organizations should enable learning (and hence continuous improvement) and have strategies for securing the support necessary for sustainability. These six organizational outcomes capture both historical and contemporary foci of organizational performance research in the health services sector. Because this report is designed as a guide to this vast literature, it takes a panoramic view that includes the multiple dimensions that have been used for each intermediate outcome.1

Quality

Historically, much of the research on health care delivery has focused on clinical quality, investigating whether the care provided to a patient was safe and medically appropriate (Donabedian 1980; Schuster et al. 1998; IOM 2001).
Clinical quality refers to whether the provider's care conformed to best clinical practice for those who use the services of the organization; it does not refer to outcome measures of population health, such as vaccination or antenatal care coverage, in which the denominator is the population. Ensuring clinical quality remains a major objective of health service delivery organizations in both high- and low-income countries. We also include managerial quality and patient experience within the quality intermediate outcome domain. Managerial quality refers to the degree to which administrative systems such as procurement, human resources, and data management support the delivery of high-quality clinical care (Moss and Garside 1995; Egger et al. 2005). Administrative systems also influence other organizational intermediate outcomes like efficiency, access, and learning; the contribution of a given managerial process to organizational performance must therefore be assessed against multiple intermediate outcome criteria. Patient experience is included within quality because of the importance of patient-centered service delivery, for which patient experience is often used as a measure and as a counterpoint to the technical standards of clinical and managerial quality (Aharony and Strasser 1993; Ford et al. 1997; Reinhardt 1998; Safran 2003; Safran et al. 2006).

Efficiency

In the context of individual organizations and their performance, the term efficiency refers to what economists call technical, rather than allocative, efficiency (Hollingsworth 2008; Rosko and Mutter 2010). Hence, efficiency in this context is a relative measure that compares inputs used (e.g., human, technological, financial) to outputs attained (number and level of services) (Hollingsworth 2008).
Efficiency has received substantial research attention in health services delivery as health care costs have increased in high-income countries due to shifts in technology, market structure, and demographic profiles (Sherman 1984; Fishman et al. 2004; Negrini et al. 2004; Rosko and Mutter 2008, 2010). We include efficiency (cost-per-output) rather than total cost because efficiency allows for greater comparability across countries and communities with diverse economic profiles. Even when a total cost measure has been standardized for comparability across countries (e.g., percent of GDP spent on health care), this is often done with reference to some other organizational intermediate outcome like quality or to an outcome like population health status, which essentially transforms the total cost measure into one of efficiency (for example, Peterson 1993; Kissick 1994). We also include under efficiency the literature that addresses value creation in health care, as value is essentially another form of output-to-input ratio that captures an organization's ability to produce a given quality of service for a lesser price or a greater quality of service for the same price (Burns 2002; Shortell 2004; Porter and Teisberg 2006).

1 From the perspective of health systems research, there is broad consensus around quality, efficiency, and utilization as intermediate outcomes for system performance (Roberts et al. 2004). There is debate in the literature over the definition of access as an intermediate outcome; some frameworks include only realized access as measured by utilization, while other frameworks use the term to refer to potential access or a right to access health care (cf. Roberts et al. 2004; de Savigny and Adam 2009). We have therefore included utilization and access as separate performance domains to reflect both sides of this debate. In this guide we adopt a facility-based perspective informed by organizational behavior and theory; from this perspective, learning and sustainability are equally important intermediate outcomes for organizational performance and are thus included in our taxonomy.

Utilization

Utilization is the volume of services delivered or of clients served. While straightforward to measure as an intermediate outcome, setting standards for the "right" level of utilization can be difficult due to the influence of diverse and variable client demand patterns (Green and Nguyen 2001). In our taxonomy, we consider utilization as an organizational performance intermediate outcome relative to organizational capacity or population health characteristics. From this perspective, an organization with chronically underutilized capacity would be considered a lower performing organization. Some excess capacity may be desirable, as such slack can facilitate organizational learning and long-run sustainability (Zinn and Flood 2009). However, too much excess capacity can constitute a cost to the organization without adequate compensatory benefits (Pauly and Wilson 1986; Keeler and Ying 1996). Similarly, utilization significantly below or above what would be expected given the health characteristics of the client population can also signal a poorly performing organization (Wennberg et al. 1987; Fisher et al. 2000). Benchmarking utilization across organizations serving similar populations is therefore an important method for assessing this intermediate outcome (Murphy and Noetscher 1999).

Access

Access refers to the potential ability of an organization's potential clients to obtain its services.
When this potential ability is realized, it results in observable utilization, which is why studies often use utilization as a proxy variable for access; conversely, lack of utilization can signal the existence of barriers to access (for example, Hall 1998; O'Mahony et al. 2008). However, access and utilization are conceptually distinct intermediate outcomes, as an individual may have access to an organization but choose not to utilize its services (Fiedler 1981). Consistent with the literature, we use access to refer to the availability, accessibility, accommodation, affordability, and acceptability of health services (Penchansky and Thomas 1981; Peters et al. 2008). In these definitions, acceptability refers to both the patient's experience with the services provided and the provider's non-discriminatory acceptance of the patient as a client. However, we categorize patient experience under quality rather than as a measure of access because patient experience is conditional on accessing care. While a negative care experience could deter a patient from accessing health services in the future, we focus here on those elements of access that are not conditional on receiving care.

Access is sometimes discussed in terms of equity or of geographical or financial coverage of health services, but these latter terms tend to apply more to health system perspectives on service delivery than to organization-level models (Aday and Andersen 1981; Gold 1998; IOM 2001; Victora et al. 2003). Although enabling physical and financial access for geographically dispersed populations or providing discretionary health care services is typically beyond organizations' jurisdiction, other aspects of access are nonetheless influenced by organizational action and are therefore included as organization-level intermediate outcomes. Hence, we retain access, but exclude equity and coverage, as organizational performance intermediate outcomes in our taxonomy.
Learning

Learning refers to the process by which an organization acquires new knowledge and translates this knowledge into organizational practices. This performance intermediate outcome encompasses not only learning by individuals within the organization but "the assimilation of individual knowledge into new work structures, routines, and norms" that can outlast any individual staff member (Davies and Nutley 2000, p. 998). Thus, organizational learning generates both changes in knowledge and changes in observable processes and organizational culture (Levitt and March 1988; Senge 1990; March 1991). Given the knowledge-centric nature of health services delivery and the importance of learning from adverse events, organizational learning in health services delivery has received increasing attention from academic researchers and endorsement from expert panels like the U.S. Institute of Medicine (for example, Berta et al. 2005; Chuang et al. 2007; Nembhard 2009; IOM 2001).

Sustainability

Sustainability is the organization's ability to continue delivering needed and valued services. Dimensions of sustainability include sustained political support from government officials, sustained community and patient support, and predictable access to needed inputs (e.g., financing, trained human resources) (Olsen 1998). As an organizational performance intermediate outcome, sustainability is measured in terms of both the organization's existing support and its strategies and efforts to secure future support (Gruen et al. 2008). We focus in this guide on the sustainability of the needed and valued health services delivered by the organization; we do not examine the important but separate stream of literature on how to sustain health services and their benefits beyond the organization through community adoption of preventive behaviors or transition from external to local funding (for example, Shediac-Rizkallah and Bone 1998; Sarriot et al. 2004).
Empirical Measures of Performance Domains in World Bank Client Countries

We conducted a structured review of the academic literature in order to identify metrics and methods used to measure the six performance intermediate outcomes, as defined above, in countries eligible for World Bank support. Although many measures from high-income countries are equally applicable in middle- and low-income countries, some metrics refer to infrastructure or services that may not be generally available among health care organizations in low-income settings. In our literature review, we sought to address this applicability question by including only studies that had been conducted in countries eligible as of 2009 for World Bank support (World Bank 2009). Our initial search retrieved 2,371 articles, which were systematically reviewed to yield a final sample of 181 articles for analysis. The articles contained hundreds of indicators for measuring the six performance domains, which were grouped into 116 unique sub-dimensions, or conceptually distinct sets of metrics. Among the articles in the final sample, quality was by far the most commonly measured domain of organizational performance (83% of articles). Access was measured in 20% of the sample, utilization in 17%, learning and efficiency in 10% each, and sustainability in 9% of articles (percentages add to more than 100% because some articles measured multiple performance domains). The methodology and full results from this literature review are presented in Appendices 1-8. Results include summary tables of the final sample articles by World Bank region, by measurement method, by health service area, by health facility unit/department, and by cross-cutting theme (e.g., safety, information technology). Appendix 1 gives instructions for identifying those studies in the sample with relevant metrics in each of these categories. Bibliographic references for the studies are found in Appendix 8.
Together, these six intermediate outcomes offer a model for what a high performing health service organization should deliver. High performing organizations should deliver high quality, efficient, accessible, and utilized services. Furthermore, high performing organizations should enable learning (and hence continuous improvement) and have strategies for securing the support necessary for sustainability. These six organizational outcomes capture both historical and contemporary foci of organizational performance research in the health services sector. Because this report is designed as a guide to this vast literature, it takes a panoramic view that includes the multiple dimensions that have been used for each intermediate outcome.2

2 From the perspective of health systems research, there is broad consensus around quality, efficiency, and utilization as intermediate outcomes for system performance (Roberts et al. 2004). There is debate in the literature over the definition of access as an intermediate outcome; some frameworks include only realized access as measured by utilization, while other frameworks use the term to refer to potential access or a right to access health care (cf. Roberts et al. 2004; de Savigny and Adam 2009). We have therefore included utilization and access as separate performance domains to reflect both sides of this debate. In this guide we adopt a facility-based perspective informed by organizational behavior and theory; from this perspective, learning and sustainability are equally important intermediate outcomes for organizational performance and are thus included in our taxonomy.

II. DETERMINANTS OF ORGANIZATIONAL PERFORMANCE: THEORETICAL FRAMEWORKS

Diverse disciplinary perspectives have been applied to explain the performance of organizations in general and of health service delivery organizations in particular. This section provides a brief history of approaches from organizational behavior and organizational theory, compares these approaches with those from economics, psychology, and sociology, and reviews several leading models of organizational performance within the health system as well as key strategies by which organizational performance can be improved.

Figure 1: OB and OT Levels of Analysis (OB examines individuals and groups within a single organization; OT examines organizations as wholes interacting with other organizations and the environment)

II.A. BRIEF HISTORY OF ORGANIZATIONAL BEHAVIOR AND ORGANIZATIONAL THEORY

Organizational behavior (OB) and organizational theory (OT) are two major complementary subfields in the academic study of organizations. OB focuses on the micro-level dynamics within an individual organization, such as interactions among staff and internal resource flows, whereas OT addresses the macro-level dynamics of the organization as a whole in its interactions with other organizations and its environment (Figure 1). The unit of analysis in OB studies is usually the individual or a team of individuals within the organization. The unit of analysis in OT is the organization, or sometimes a system of organizations such as an industry within a given geography. Today OB and OT tend to be associated with management and business administration, although their development has been influenced by economics, psychology, and sociology. Both OB and OT have addressed the performance of health service delivery organizations. Some questions about organizational dynamics span both OB and OT. For example, the role of leadership has been studied as a factor in internal staff team performance (OB) as well as in external organizational strategy (OT) (Yukl 1989; Klein et al. 2006; Gilmartin and D'Aunno 2007). Organizations' quality improvement efforts have been explained using models of staff incentives and education (OB) as well as models of market competition and regulatory pressures (OT) (Flood and Fennell 1995).
Interventions to improve organizational performance are possible at multiple levels, and successful interventions will typically require both OB and OT perspectives. The historical development of OB/OT studies over the past century can be understood in broad terms as an evolution from closed to open system theories. Closed systems theories are those that conceptualize the organization as an isolated entity whose dynamics are independent of other organizations or its environment (Scott 1961; Kast and Rosenzweig 1972). Closed systems theories seek to explain the behavior of individuals within the organization in terms of factors internal to the organization (such as working conditions, reporting hierarchies, staff relationships, and monetary or non-monetary incentives) (for example, Taylor 1916; Fayol 1916; Gantt 1919; Gulick and Urwick 1937; Mooney 1947). The creators of these theories did not themselves use the term "closed systems"; it was coined later by open systems theorists who were contrasting their new approaches with historical understandings of organizational performance (Scott 1961; Katz and Kahn 1966; Ashmos and Huber 1987). Closed system theories predominated through the 1950s but came under increasing criticism thereafter for inadequate attention to the influence of environmental factors on organizational behavior and performance (Scott 1961; Scott 2004). From the 1950s onwards, organizational theorists instead borrowed paradigms from the natural sciences to conceptualize organizations as social organisms, which led to open systems theories (Katz and Kahn 1966; Kast and Rosenzweig 1972). Open systems theories view the organization as embedded in an environment, and thus part of a system that includes other organizations as well as political, economic, social, and cultural institutions (Scott 2004).
The organization itself is also conceptualized as a system, composed not just of individual workers but also of formal and informal groups of individuals and of processes for exchanging resources with the environment (Katz and Kahn 1966). Open systems theories seek to explain the behavior of individuals within the organization in terms of influences from the environment (such as professional socialization, gender and ethnic identities, exchanges with other organizations, or technological innovation) (Flood and Fennell 1995). From an open systems perspective, the behavior of the organization as a whole is explained by its efforts to manage its relationship with its environment, including other organizations (Katz and Kahn 1966; Lawrence and Lorsch 1967; Pfeffer and Salancik 1978). Such efforts might include strategic alliances, public relations initiatives, vertical or horizontal integration (or differentiation), collaborative learning networks, or mimetic isomorphism (i.e., imitation of similar organizations) (Lawrence and Lorsch 1967; Meyer and Rowan 1977; Pfeffer and Salancik 1978; DiMaggio and Powell 1983). Activities that span the boundary between an organization and its environment merit special attention in open systems theories (Scott 2004). A key feature of open systems theories is that the interventions required to improve organizational performance are contextual, i.e., dependent on the specific constellation of environmental factors and internal group dynamics of the selected organization (Drazin and Van de Ven 1985). In addition to the evolution from closed to open systems theories, OB/OT has moved between so-called "rational" and "natural" system theories over time. Rational system theories assume that individual, group, and organization behaviors result from calculated evaluations of (monetary and non-monetary) costs and benefits in pursuit of knowable objectives (Scott and Davis 2006). 
By contrast, natural system theories posit that rational calculation is impossible, imperfect, or infrequently used to guide individual, group, and organization behaviors; instead, behaviors are determined by emotions, norms, and beliefs (Scott and Davis 2006; Cohen et al. 1972; March 1978). The debate between rational and natural system theories is ongoing, with empirical evidence offering some support for each. Contemporary organizational management practices draw on lessons from open, closed, rational, and natural system theories. Each of these categories includes theories that have been advanced, at one time or another, as general explanations of organizational dynamics (Table 3).

Table 3: Summary of Key OB/OT Theories

Closed system theories
- Rational system theories: Scientific management (1910s-1920s); Administrative behavior theory (1950s); Transaction economics (1970s-1980s)
- Natural system theories: Human relations school (1930s); Social psychology of organizations (1970s)

Open system theories
- Rational system theories: Resource dependency (1980s-1990s); Population ecology (1990s); Network theory (2000s)
- Natural system theories: Contingency theory (1970s); Institutional theory (1980s)

With the benefit of hindsight, it is clear that none of these approaches alone explains or predicts behavior in every organization. Instead, they highlight the need for researchers and practitioners to consider multiple possible explanations for a given organization's performance, including factors internal and external to the organization, the potential for competing objectives among organization members, and the use of both objective and subjective decision-making processes. Understanding this theoretical history offers organizational managers a way to protect against "blind spots" as they diagnose and address performance problems.
Illustrative questions drawn from closed, open, rational, and natural system theories that policy makers and managers might consider in assessing organizational performance and designing interventions to improve performance are presented in Table 4.

Table 4: Sample Diagnostic Questions about Organizational Performance based on OB/OT

Closed system theories (OB)
- Rational system questions: Are performance incentives in place? Is a system of performance monitoring functioning? Are tasks divided and allocated efficiently? Are employee tasks well-defined? Are employees appropriately trained for their assigned tasks? Have standard operating procedures or decision-making guidance been provided to employees?
- Natural system questions: What informal social groups exist among employees? What norms govern behavior in these groups? Are these norms compatible with organization objectives? Are employee needs and aspirations expressed, understood, and addressed? Are measures for continuous quality improvement in place?

Open system theories (OB and OT)
- Rational system questions: What resources are necessary for the organization's survival? What processes does the organization use for obtaining needed resources? How is the organization supported (materially and administratively) and monitored by the government? How efficiently does the organization transact with its environment? How well does the organization's structure fit with its environment?
- Natural system questions: What constitutes social legitimacy for this organization? Is the organization respected by its clients, collaborators, and competitors? Are there laws or regulations that constrain or enable the organization's behavior? Is the organization explicitly trying to imitate and learn from other organizations in its field?

II.B. COMPARING OB/OT WITH ECONOMICS, PSYCHOLOGY, AND SOCIOLOGY

The disciplines of economics, psychology, and sociology have also been used to explain how organizations and the individuals within them perform, and thus share many features with OB/OT.
OB is closest to individual-level approaches from psychology and behavioral economics, while OT is more similar to organization-level approaches in sociology and neoclassical microeconomics. As discussed above, OB/OT includes both rational and natural system theories; rational system theories accord with the key assumptions in neoclassical microeconomics of optimizing behavior by individuals and firms, while natural system theories share assumptions with psychology, behavioral economics, and sociology about the importance of emotions, norms, and social relationships. However, important distinctions remain between these disciplinary approaches to explaining organizational dynamics.

Economics

Economic models of organizational performance are distinguished from OB/OT and other disciplinary approaches by their assumption of utility-maximizing choices (for individuals) or profit-maximizing choices (for firms or organizations) subject to various constraints and technical production functions (Varian 1992; Pindyck and Rubinfeld 2000). Individuals' or organizations' chosen actions are taken as a reliable indication of their preferences, given the constraints and technological realities that they face (Varian 1992; Pindyck and Rubinfeld 2000). Constraints can include limited time, financial resources, technology, or information. Strategic interaction among individuals or firms is accommodated in game-theoretic economic models. Economic research on health service delivery organizations has mainly focused at the micro-level on provider payment arrangements, and at the macro-level on questions of market regulation and competition, technology adoption, insurance incentives, ownership structure, service pricing, and production efficiency. At both levels, prices and opportunity costs (monetary and non-monetary) play an important role in determining behavior.
By contrast, OB and OT more commonly include non-price considerations in models of the behavior of organizations and their members and subsequent organizational performance. For example, OB explanations of physician behavior may incorporate factors beyond payment arrangements, such as technical expertise, professional norms, intra-organization group conflicts, personal ethics, legal requirements, patient relationships, and historical hierarchy. In theory, all of these factors can be accommodated in economic models; however, because these non-price decision inputs are less easily quantified, they are less amenable to econometric analysis and hence appear less frequently in empirical economic studies. At the macro-level, OT differs from economics in terms of its understanding of the organization's interactions with its environment. In economic theory, the organization's environment is conceptualized in terms of the markets in which it operates. An organization's interactions with its environment focus on obtaining the inputs it needs and selling the outputs it produces under the prevailing regulations and prices (Caves 1998). OT considers how an organization's reputation and legitimacy affect its ability to obtain resources, produce outputs, and retain political and social legitimacy; OT also recognizes that organizations pursue goals other than profit maximization (Scott 1961; Weber 1925). While goals other than profit maximization are modeled in some economic literature on non-profit health care delivery organizations, the pursuit of such goals is usually presented as inefficient and sub-optimal performance (Newhouse 1970; Folland et al. 2006).

Psychology

Models of organizational behavior from psychology emphasize the role of individuals and the perspectives and historical experiences that influence individual behavior within organizations.
Psychological models, which accept the influence of the unconscious on behavior, claim that individuals cannot always choose how they behave, or consciously know why they are behaving as they are. Such models often conflict with economic theories that assume behavior is produced by conscious choice and hence subject to individuals' calculated actions. Several subfields of psychology have relevance for organizational dynamics. Behavioral psychology focuses on individuals' externally observable behaviors in response to environmental stimuli; through conditioning (rewards and punishments), behavioral psychology suggests that individuals can be taught to behave in particular ways in response to specific stimuli (Skinner 1953). Cognitive psychology, on the other hand, focuses on the mind's less observable processes of perception and cognition, including how individuals unconsciously acquire information and make decisions (Freud 1923). Organizational psychology looks at both individual and group behaviors, modeling group dynamics on the basis of the individuals in those groups and the (unconscious and conscious) memories and experiences that those individuals bring to group settings (Munsterberg 1913; Lewin 1935, 1936; Landsberger 1958). Although there are competing schools within psychology, most are compatible with OB theories of micro-level organizational dynamics and with some elements of OT. OT shares some concepts with organizational psychology in terms of group dynamics that transcend individual organizations and are influenced by the environment.
OT is also influenced by the implications of behavior guided by the unconscious, because this individual-level phenomenon can help explain organization-level behavior that seems to deviate from "rational" organizational objectives of survival and from the "optimal" means of achieving organizational objectives, particularly in the area of group relations and power and their influence on organizational behavior (French and Raven 1959; Pfeffer 1981; Mintzberg 1985; Smith and Berg 1987). In terms of research on health service delivery organizations, psychology research has typically focused on patient-provider relationships, human resource management, leadership, and employee motivation. This research underscores the non-medical determinants of patient outcomes as well as the non-financial determinants of provider performance, offering a different set of potential policy levers for improving health services.

Sociology

Sociological models of organization behavior are distinguished from psychology, economics, and OB/OT by the primacy they accord to social and community structures in explaining individual and organizational behaviors. Sociological approaches emphasize how individuals' behaviors are shaped by gender, race, ethnicity, age, religion, social class, and the attendant socially constructed meanings of these categories (Weick 1976; March 1991; Anderson 1999; Axelrod and Cohen 2000). This emphasis on social groups and structures has helped explain why organizations with similar material resources, trained personnel, management practices, and patient profiles may perform differently when their employees' social backgrounds are different or when they operate within different social environments.
At the macro level, sociological models routinely situate organizations in their environment, looking at how organizations are affected by their social environment and how they in turn affect their environments (for example, by creating new forms of wealth, social capital, or social disparity) (Weber 1925). In the health services sector, sociological studies have focused on the non-medical determinants of health (such as poverty, social class, and stigma), barriers to accessing existing health services, socially constructed roles of sickness, and the experiences of different sub-groups in seeking and providing health care (Parsons 1951; Waitzkin 2000, 2001). OB and OT are both influenced by sociology but view social structures as only one factor shaping the organization's internal and external environments. Historically, OB research has focused on individuals' organizational roles (e.g., manager) rather than the socially defined attributes of the individual in that role (e.g., gender, race/ethnicity), and OT research has examined the organization's interactions with other organizations in its environment rather than with the community or social structures. However, network theory research is one area of OB/OT that integrates analysis of both social and organizational structures to detect patterns of behavior, power, and information flow (Nohria 1998; Shortell and Rundall 2003). Organizational dynamics are complex and require multidisciplinary approaches. Understanding some of the unique strengths and paradigms of economics, psychology, sociology, and OB/OT as applied to the study of organizations can help managers, policy makers, and researchers consider a wider range of potential determinants of organizational performance and design more effective interventions to improve performance.
We offer an example (Table 5) of how these different disciplines might view a common challenge in health care to highlight potential divergences and synergies among disciplinary paradigms; it is suggestive rather than definitive of how experts in these fields might respond. This hypothetical example illustrates the challenges in determining the root causes of poor performance. Multiple causes may underlie an individual's observed performance, and similar performance levels across individuals may have differing causes. Without a full understanding of root causes and possible capacities, it is not possible to say which intervention will be effective in addressing a hypothetical situation, although considering multiple disciplinary perspectives can help to identify a broader range of likely causes and solutions.

Table 5: An Example of Different Disciplinary Approaches to Improving Performance of Health Services Delivery Organizations

Organizational Behavior: A government nurse running a primary health clinic in a rural village charges some patients extra fees above the official government price, secretly retaining the surplus for himself.

Organizational Intermediate Outcomes Affected: Quality, Efficiency, Utilization, Access, Learning, Sustainability

Economics
- Possible explanations of the behavior: (1) Nurse's salary is too low or too infrequently paid, and punishment for corruption is too lax. (2) Government prices are set too low; some clients are willing to pay more. (3) Nurse is a de facto monopolist of officially sanctioned health services delivery in the village, allowing him to price discriminate among clients.
- Possible interventions to alter behavior: (1) Increase nurse's salary, pay it regularly, and increase monitoring of and punishment for corrupt practices. (2) Increase official prices and provide targeted subsidies to those who cannot afford them. (3) Permit and incentivize competition from other accredited (private) providers in that village/region.

Psychology
- Possible explanations of the behavior: (1) Nurse has developed a positive psychological association with having power, or dominating other people. (2) Nurse has learned a behavior in response to historical environmental stimuli: he grew up in conditions of financial insecurity, and he takes extra fees to feel more secure.
- Possible interventions to alter behavior: (1) Provide individualized therapy to develop alternative positive associations. Promote professional and community norms of shame around corruption and exploitation of others. (2) Provide group or individualized therapy to alter the response to stimuli of felt insecurity. Institute and apply systematic punishment for corruption.

Sociology
- Possible explanations of the behavior: (1) Nurse's behavior is consistent with social norms and a hierarchical class structure in which he, as an educated professional, is not accountable to the poorer, less educated village residents. (2) Nurse is from the ethnic group that holds political power, while many village residents are of an ethnic group associated with the opposition party; the nurse's behavior is a local manifestation of national political dynamics.
- Possible interventions to alter behavior: (1) Promote anti-corruption social norms among civil servants and during professional training of nurses. Educate and empower village residents to assert their rights. (2) Highlight to leaders of the ethnic group in power the political risks of corrupt public service provision, which can fuel opposition mobilization. Encourage multi-party oversight of social services.

OB/OT
- Possible explanations of the behavior: (1) Nurse's professional peers also charge extra fees to their patients; they see it as their right because the government does not pay them as they deserve. Discussing this common grievance and practice allows the nurses in the district to feel solidarity with each other. (2) Nurse does not feel ownership of the clinic's mission. (3) Nurse knows that transfer to an urban post is reserved for those with high-level political connections. Because good performance will not enable him to advance in the system, he tries to make the best of his current situation.
- Possible interventions to alter behavior: (1) Identify "positive deviants" (nurses who do not charge extra fees). Promote their strategies and the conditions that enable them to resist peer pressure. Create alternative shared experiences to generate solidarity among nurses. (2) Understand the needs of the nurse that the clinic can accommodate. Improve the working environment and culture so that the nurse wants to support the clinic's mission and feels part of the organization rather than an isolated individual. (3) Create a transparent, merit-based system of promotions.

II.C. CONCEPTUAL FRAMEWORKS OF ORGANIZATIONAL PERFORMANCE WITHIN HEALTH SYSTEMS

Many conceptual frameworks of health systems recognize that organizational performance plays a critical role in health system performance; however, most frameworks are relatively high-level in their description of organizational dynamics and their influence on health system performance. For instance, the six building blocks of the WHO Health System Framework (Figure 2; De Savigny and Adam 2009, p. 31) include several components that are relevant for organizational performance, such as service delivery, health workforce, information, and leadership/governance. Nevertheless, these are not a comprehensive set of organizational levers that may determine health services performance, nor does the model discuss how these building blocks may explain variation in organizational intermediate outcomes. The WHO framework's "systems thinking" approach is compatible with organizational theory but does not address organizational dynamics directly.
Figure 2: WHO Health System Framework (system building blocks: service delivery; health workforce; information; medical products, vaccines and technologies; financing; leadership/governance. Access, coverage, quality, and safety link the building blocks to the overall goals/outcomes: improved health (level and equity), responsiveness, social and financial risk protection, and improved efficiency)

Organizational dynamics are given greater prominence in other health system models, such as the five "control knobs" for health sector reform defined by Roberts, Hsiao, Berman, and Reich (2004). As shown in Figure 3, organization is one of these knobs, encompassing "the overall structure of the health-care system" as well as "the individual institutions that provide health-care services" (Roberts et al. 2004, p. 212). In their model, organization-level interventions are explicitly connected to health system performance. Proposed interventions include changing organizations' ownership, scope, or scale; increasing competition or contracting; and improving internal management practices. All of these interventions are congruent with recommendations from either organizational theory or organizational behavior; however, OB/OT offers greater depth in understanding each of these interventions, particularly management practices, and suggests additional strategies for organizational change beyond those discussed in the control knobs model. The control knobs model views organizations as important in influencing health system performance but focuses on only a few dimensions for influencing organizational performance.

Figure 3: Control Knobs Health System Framework (control knobs: financing, payment, organization, regulation, behavior; intermediate performance measures: efficiency, quality, access; performance goals for the target population: health status, customer satisfaction, risk protection)

The U.S. Institute of Medicine (IOM) offers a third framework for how organizations impact health care delivery systems (IOM 2001, p. 118).
As shown in Figure 4, in the IOM framework organizations are influenced by environmental factors such as regulation and payment mechanisms, and in turn influence the performance of care teams within the organization. In contrast to the WHO and control knobs health system frameworks, the IOM framework focuses more narrowly on the care delivery system; however, it is helpful in drawing attention to the linkages between factors external and internal to the organization.

Figure 4: Institute of Medicine Care System Framework (a supportive payment and regulatory environment enables organizations that facilitate the work of patient-centered teams, which in turn yield high performing patient-centered teams and outcomes that are safe, effective, efficient, personalized, timely, and equitable)

Drawbacks of these frameworks are the lack of feedback effects among the inputs, intermediate outcomes, and final outcomes, and the lack of attention to the environment beyond the health system. From an OB/OT open systems perspective, environmental conditions are neither static nor independent of organizational actions; changes in health service organizational performance intermediate outcomes can change conditions in the health system and broader social environment. Using the health service delivery organization and the six performance intermediate outcomes defined above, we see the relationship between the organization and the health system as more appropriately represented as in Figure 5. Although this open systems framework is complex, it incorporates the reciprocal relationships between the environment, health system, and organization, as well as the feedback effects of organizational intermediate outcomes and health system outcomes on these different levels. These health system models generally take a macro perspective; however, improving organizational performance requires both micro and macro perspectives.
To use the metaphor from the WHO's Systems Thinking report, understanding organizational dynamics requires both system-level "forest thinking" and organization-level "tree-by-tree thinking" (De Savigny and Adam 2009, p. 43). The next section describes some of the relationships between these two levels of thinking in terms of the determinants of organizational performance.

Figure 5: An Illustrative OB/OT Perspective on the Organization in the Health System (the environment, including the political system, economy, and education system, surrounds the health system with its governance, regulation, financing, payment, and workforce; within the health system, the organization's structure, people, groups, materials, technology, processes, and infrastructure, together with the community, produce the intermediate outcomes of quality, efficiency, utilization, access, learning, and sustainability, which contribute to the final outcomes of improved health status and risk protection)

II.D. DETERMINANTS OF HEALTH SERVICE DELIVERY ORGANIZATION PERFORMANCE

Improving organizational performance requires an alignment among environmental conditions, implementation capability, and the choice of strategy targeting organizational performance (Figure 6). The framework in Figure 6 intentionally does not indicate causality or directionality among the elements. Typically the strategy will be the "choice" variable, with environmental conditions and implementation capabilities taken as "givens", but interventions are possible in each of these three areas. Policy makers and organizational managers can create environmental conditions that are more (or less) conducive to organizational performance, build the capability of individual organizations to implement chosen strategies, or select strategies that can work given the organization's environment and capabilities. However, if a particular strategy is desired or already in place, it may be necessary to intervene at the level of the environment or of the implementation capability of the organization to enable the selected strategy to succeed.
Any of the three dimensions can be held fixed or designated as the point of intervention in a given performance improvement program; however, compared with strategies and implementation capabilities, the organizational environment is generally more difficult for organizational managers to change.

Figure 6: Aligning Determinants of Organizational Performance. [Diagram: strategy, environmental conditions, and implementation capability are mutually linked and jointly determine organizational performance.]

The need for alignment along these three dimensions has several implications for designing performance improvement programs. One implication is that organizations in the same environment may need to use different strategies to enhance performance based on their differing levels of implementation capability (or conversely, implementation capabilities need to be made uniform if all organizations are expected to implement an identical strategy in a similar environment). A second implication is that the set of initial conditions does not determine the choice of performance improvement strategies or set of interventions. Of two organizations with identical environmental conditions, one might choose to start executing a strategy with its existing capabilities while another might choose to first enhance its implementation capability in order to execute a more challenging strategy. The environment is likely to be relatively fixed over the period of many performance improvement programs; however, interventions at the level of the environment (e.g., regulatory or payment reform) are possible and should be considered, especially when altering implementation capability or strategy is likely to be difficult without such environmental alterations.
A third implication of the importance of alignment is that performance improvement strategies cannot be designed in a vacuum as 'ideal types'; rather, they must be selected with consideration for the specific organization's environment and implementation capacity. A final implication is the need for flexibility in the implementation of performance improvement programs. Although two of the three dimensions are held 'fixed' for purposes of designing a performance improvement program, in reality all three are moving targets. As a result, performance-enhancing interventions will need to be aligned and then periodically realigned as conditions evolve.

II.E. CREATING ENABLING ENVIRONMENTAL CONDITIONS

The organization's environment can either facilitate or inhibit performance. As discussed above, the environment includes not only the elements of the health system but also the broader political, economic, and socio-cultural systems in which the organization operates. Interventions at the environment level to enhance the success of organizational performance strategies could include the creation of licensing standards, safety regulations, and institutional accreditation; decentralization of decision making to the facility level; investment in workforce development and the health professional educational pipeline; deployment of data collection infrastructure and health management information systems; improvement of procurement and supply chain management processes; use of performance-based funding/contracting; or privatization of state-owned health facilities. Although these are not the focus of this report, substantial literature exists to guide the design, implementation, and evaluation of such environmental interventions at the macro-level of the health system (for example, Roberts et al. 2004; De Savigny and Adam 2009; Joumard et al. 2010).
For our purposes, the critical insight is that some environmental conditions are malleable, especially over the medium- to long-term, and therefore strategies to improve organizational performance should also consider interventions to create an enabling environment.

III. STRATEGIES FOR IMPROVING ORGANIZATIONAL PERFORMANCE: A CLASSIFICATION SYSTEM

Based on our reading of the literature, we identify seven broad categories of potential strategies for improving organizational performance. We define a strategy as a set of activities or interventions that together are designed to achieve a pre-specified objective. The seven categories are each associated with a certain mindset, or mental model, drawn from particular academic disciplines; each strategy will be optimally effective under particular conditions (Table 6). Underlying each strategy are assumptions about both the drivers of human behavior and the root causes of poor organizational performance. Although each strategy targets a primary root cause, a given strategy may be able to address multiple root causes and a given root cause may be amenable to several strategic responses (Table 7). Furthermore, the strategies are not mutually exclusive; several strategies may be used together depending on environmental conditions and implementation capability. Although this report focuses on the micro-level of the health care facility, these strategies can also be applied at broader levels of the health system. For instance, at the meso-level, a sub-national district health system might develop a strategy for multiple facilities in its jurisdiction that are facing similar root causes of low performance. At the macro-level, a Ministry of Health might initiate a national strategy to improve certain elements of performance in health facilities across the country.
A major challenge in moving from the micro- to the meso- and macro-levels is maintaining alignment of the strategies selected with the relevant environmental conditions and implementation capabilities of the targeted level of organization. To ensure alignment, decision makers at the meso- and macro-levels should have valid facility-level data infrastructure and participation from facility-level staff to develop and implement effective strategies for improving organizational performance. These strategies can also be used to improve the performance of organizations operating at the meso- or macro-levels of a country's health system. A district health agency or a national Ministry of Health is a meso- or macro-level organization whose performance is determined by many of the same factors as hospitals or clinics. For example, low levels of staff motivation can be a root cause of poor performance within a single hospital or within the district or national health agency. The strategy areas that map to this root cause (Table 7) are equally applicable at any of the three levels. However, the substantive domains and metrics of performance for meso- or macro-level administrative organizations will differ to some extent from those defined in Section I for micro-level care facilities.
Table 6: Typology of Strategies for Improving Organizational Performance

Standards and guidelines
Disciplinary mindset: Law, science, and ethics
Key elements: Identify processes that can be standardized; develop standard operating procedures; train staff on standards and guidelines; incorporate adherence to guidelines into staff and institutional performance criteria
Conditions for effectiveness: Processes can be standardized; adherence to guidelines can be assessed and monitored; a moral rationale exists for regulating behavior
Examples of interventions: Clinical pathways; standard operating procedures (e.g., admissions, warehousing, waste disposal, patient records)

Organizational design
Disciplinary mindset: OB/OT, management
Key elements: Select a functional or cross-functional structure; determine lines of reporting for staff and the effective span of control for managers; align responsibility and authority in each role
Conditions for effectiveness: Cross-functional collaboration is important for implementation; the organization is large and production/service delivery processes are complex
Examples of interventions: Integrated care teams within hospitals for specific diseases, with dedicated management and administrative support

Education and training
Disciplinary mindset: Education and public health
Key elements: Provide high-quality pre-service training linked to competencies and socialization into norms of professionalism; implement a system to identify knowledge and skills gaps and to fill them through in-service training; facilitate staff access to new technical knowledge through information resources and learning events
Conditions for effectiveness: A skills and/or knowledge gap is the root cause of the implementation problem; staff can be educated and are already interested/motivated
Examples of interventions: In-service training for doctors, nurses, and midwives; provision of learning materials / access to new technical knowledge resources for clinical staff; in-country training programs

Process improvement and technology and tool development
Disciplinary mindset: Engineering, management
Key elements: Implement measurement of process indicators; identify opportunities for process improvement; identify and obtain needed tools, equipment, and materials; develop and test new processes and technologies (consider borrowing solutions from other organizations)
Conditions for effectiveness: Human-proof/clever systems can be designed and implemented cost-effectively
Examples of interventions: Data capture and feedback mechanisms; cell phone/PDA disease surveillance; reminder systems; Plan-Do-Study-Act (PDSA) cycles

Incentives (monetary or non-monetary)
Disciplinary mindset: Economics, behavioral psychology
Key elements: Define the performance objective; identify relevant incentives related to the objective based on input from staff; design the incentive scheme and align staff authority with the level required for the target; implement the incentive scheme and monitor performance
Conditions for effectiveness: Incentivized behavior is aligned with the objective; incentivized staff have control over the outcome; the outcome is reliably measured; gaming is limited; incentives are affordable
Examples of interventions: Payment conditional on achievement of targets (e.g., immunization, prenatal visits, assisted deliveries); private wings in public hospitals to keep physicians from leaving hospitals for private practice; creation of reliable monitoring systems for organizational outputs

Organizational culture
Disciplinary mindset: OB/OT, sociology, anthropology, psychology
Key elements: Survey staff and management attitudes towards and beliefs about the organization and its work; identify formal and informal structures, processes, group dynamics, and communication patterns that contribute to staff attitudes and beliefs; develop (with input from all staff) a vision for the organization's objectives and the organizational culture that would facilitate achievement of those objectives; determine changes needed to structures, processes, groups, and communication to create the desired culture
Conditions for effectiveness: Reference groups can be identified and engaged; team-based work is required
Examples of interventions: Enhance supportive supervision and accountability; strengthen teamwork; embed quality improvement principles and practices

Leadership and management
Disciplinary mindset: OB/OT, management
Key elements: Establish leadership/management roles in health facilities, including revised lines of responsibility and authority; equip leaders and managers with the necessary autonomy and authority to develop and achieve the organizational mission; develop problem solving skills at facility levels
Conditions for effectiveness: Government must be willing to devolve management responsibility and authority; monitoring systems for management accountability must be credible; legal systems are in place to ensure accountability and recourse
Examples of interventions: Develop and support an executive role at facilities (e.g., hospital CEOs); create community management committees for local health facilities; training, mentoring, and coaching programs

Table 7: The Relationships between Root Causes and Strategy Choices

Root causes of performance gaps: (1) evidence about best practice does not exist or has not been disseminated; (2) authority and accountability for action are not formally aligned with staff responsibilities; (3) staff do not have the required skills and knowledge for assigned tasks; (4) tools and technology that allow staff to perform at standard are lacking; (5) staff are not motivated to perform assigned tasks; (6) the organization does not support staff in performing assigned tasks; (7) staff lack guidance and vision for their work.

[Matrix mapping strategy areas to the root causes they address: standards and guidelines and organizational design each address one root cause; process improvement and technology and tool development and incentives each address two; organizational culture addresses three; education and training addresses four; leadership and management addresses all seven.]

IV.
IDENTIFYING AREAS FOR PERFORMANCE IMPROVEMENT PROGRAMS: METHODS

This section summarizes methodological approaches that can be used to determine the root causes of performance gaps, select an appropriate performance improvement strategy, and assess the progress and effects of those strategies over time. The preceding sections of this report have identified the key organizational intermediate outcomes to be measured and the determinants of organizational performance as viewed from different conceptual and disciplinary perspectives. This section presents a way of bridging between determinants and intermediate outcomes through the use of tailored methodologies in each stage of the performance improvement process. First, decision makers at both the organization and health system levels need to understand why organizations are performing well or poorly. Second, decision makers need to design feasible strategies that address the underlying reasons for performance gaps. Finally, systems must be in place to measure implementation progress, to judge if the selected strategy is producing its intended effects, and to facilitate learning and adaptation. These steps and their associated methods create the foundation for effective performance improvement programs in health service delivery organizations.

IV.A. SELECTING MEASURES FOR ASSESSING ORGANIZATIONAL PERFORMANCE

Many choices for metrics exist in each of the six organizational performance domains. Different metrics may be more or less appropriate or feasible in different country and organizational contexts. We do not restrict our analysis to metrics with proven cross-country reliability and validity; instead, we report results from a broader set of articles, some of which may only be applicable in contexts very similar to those in a given study.
This approach is intended to facilitate selection of metrics that are best suited to a given country and organizational context rather than to identify a universally valid set of indicators. We recommend several principles below as guidance for selecting metrics for inclusion in assessments of organizational performance.

Principle 1: Include Performance Metrics from Each Intermediate Outcome Domain

Considering performance metrics in each of the six domains is important for guiding the process of identifying performance gaps and improvement priorities. Including metrics from each domain can reveal performance issues not previously apparent or potential synergies for intervention in multiple domains. Where feasible, multiple metrics from each domain should be included. This is particularly important for understanding organizational performance comprehensively, as investing resources in one area (for example, improving access) may limit resources spent in another intermediate outcome area (for example, improving quality). There may also be linkages among the different performance domains such that underperformance in one domain may contribute to low performance in other domains. For example, inefficiency may impair the organization's sustainability, poor quality could reduce utilization, and low utilization could limit the organization's opportunities for learning. The strength and directionality of these linkages will vary by organizational context. Understanding the whole of organizational performance requires attention to all six domains.

Principle 2: Use Existing Data Sources Where Possible

When possible, existing data sources should be used to reduce the time and resources required for organizational assessment. Often data will be generated in the process of health service delivery, so the act of producing organizational outputs can also provide the data needed to assess performance.
However, existing data must be complete, accurate, and timely to be adequate for the intended purpose. If existing data do not meet these conditions, different metrics or data collection processes and/or additional investment in data infrastructure will be needed. As building data infrastructure can be a long-term process, organizations may need an intermediate plan for collecting data while the human and technological capacity for more permanent and sophisticated infrastructure is developed.

Principle 3: Test Reliability and Validity of Metrics in the Context of Interest

When adapting or applying a metric from one context to another, it is critical to test the metric's reliability and validity in the new context. This can be accomplished by using the metric in a pilot assessment with a smaller group of respondents or facilities. Reliability refers to the consistency of a measure when used in repeated applications, while validity is "the degree to which a measure assesses what it purports to measure" (Fink 2005, p. 147). Appendix 9 provides a list of sources that can guide users through the theory and practice of testing the reliability and validity of candidate metrics. If measures have not been used previously in a specific population, it is also important to test for cultural equivalence and relevance of the concepts and language used in the metric. Qualitative methods are especially valuable for determining whether the constructs underlying an assessment are salient and acceptable in a given cultural context, and whether these constructs are expressed in a format and language that is appropriate to the intended audience (Curry et al. 2009). Evaluating existing instruments using cognitive interviewing methods (Schwarz and Sudman 1996; Sudman et al. 1996) is also useful for assessing cultural equivalence or relevance and uncovering limitations in survey constructs or item construction.
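For multi-item metrics, one common reliability statistic computed from such a pilot is Cronbach's alpha, which summarizes internal consistency. The sketch below uses invented pilot ratings and plain Python purely for illustration; an actual assessment would use a statistical package and an adequately sampled pilot.

```python
# Illustrative sketch: internal-consistency reliability (Cronbach's alpha)
# for a piloted multi-item metric. All pilot data below are invented.

def variance(xs):
    """Population variance, as used in the standard alpha formula."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(rows):
    """rows: one list of item scores per respondent (same item order)."""
    k = len(rows[0])                      # number of items
    items = list(zip(*rows))              # scores grouped by item
    item_var = sum(variance(list(col)) for col in items)
    total_var = variance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical pilot: 5 respondents rating a 4-item patient-experience scale.
pilot = [
    [4, 4, 5, 4],
    [2, 3, 2, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
]
print(f"Cronbach's alpha = {cronbach_alpha(pilot):.2f}")  # prints 0.96 for this invented pilot
```

Values above roughly 0.7 are often taken as acceptable internal consistency, though the appropriate threshold depends on the intended use of the metric.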
Principle 4: Weigh Costs and Benefits of Internal and External Data Collection

One consideration is whether data should be collected by individuals internal to the health service delivery facility or by an external party. There are benefits and costs to each approach. Data collection by health facility staff has the advantage of involving staff in the discovery of performance gaps and improvement progress, which may have positive spillover effects in terms of staff motivation and ownership of organizational change initiatives. Integrating data collection into existing care processes may also be less costly if it can leverage existing information management infrastructure. In addition, if the selected metrics require specialized skills to measure (for example, in the case of clinical quality), it may be difficult to find qualified external assessors. Relying on internal data collectors can also develop measuring and monitoring capacity in the country. However, some metrics may not be reliably reported by health facilities themselves, especially in cases where health facilities face performance incentives that deter them from reporting negative results. In such cases assessment by an external party is important. If the external party conducts the same assessment in multiple facilities, he or she may also acquire a cross-facility perspective that can be valuable in analyzing trends or identifying best practices. The drawbacks of external data collection include its cost and the potential disconnect between data collection and performance improvement efforts. External assessors may also suffer from a lack of local or historical knowledge of the organizational context that impairs their ability to detect underlying causes of performance differences.
Principle 5: Engage Stakeholders in the Assessment Process

Using participatory methods for the design and execution of the assessment can increase stakeholders' ownership of the assessment results and commitment to improving performance (Fink 2005). Although it is difficult to involve all stakeholders in the selection of metrics, a variety of methods exist to solicit input and feedback from representative groups of stakeholders on the assessment process (Minkler and Wallerstein 2003; Israel et al. 2005). Involving stakeholders in the early stages of metric selection and assessment processes can also create networks that facilitate subsequent diffusion of best practices and performance improvement interventions (Bradley et al. 2009). When involving stakeholders in metric selection, conflicts of interest are always possible; stakeholders may select metrics that suit their own interests rather than those that rigorously assess organizational performance. Ensuring representation from multiple stakeholder groups may be one way to offset such tendencies; embedding the assessment in discussion of a shared vision for health service delivery organization performance may also be effective. Piloting or incorporating multiple metrics for the same performance dimension may also be helpful to verify that the results do not differ dramatically when an alternative metric is used (Vitikainen et al. 2009).

Principle 6: Estimate Resources Required for Data Collection

As the methods used to collect data on different metrics vary in their time- and resource-intensity, anticipating the resource implications of different metrics is an important step in the process of developing any assessment of organizational performance. Estimates of resources for adequate performance assessment should be developed not only for the short-term investment of the current assessment but also for the longer-term process of ongoing performance improvement.
Putting in place appropriate data infrastructure could include investments in new staff or staff training, new technology, new forms, and/or new processes of data collection. Organizations should also consider the resources required to integrate their performance assessment data collection efforts with their national Health Management Information Systems over the long run.

Principle 7: Align Data Collection Methods to Fit with Domain

Multiple data collection methods are possible for each domain, although certain methods may be more suitable than others for selected contexts. For example, efficiency and utilization rely on quantitative data about the volume of services provided; these quantitative data typically can be collected from primary or secondary sources at the facility or a government agency. Primary and secondary quantitative data may also play a role in measuring quality, access, learning, or sustainability depending on the metric chosen; however, these latter four performance intermediate outcomes will typically involve some degree of qualitative data. For example, measuring quality via patient experience could require observing patients' interactions with health facility staff in addition to quantitative assessment via surveys. Measuring access in terms of the availability of services might include speaking with focus groups of community members about their care-seeking experiences as well as observing staff absences at the facility. In short, the data collection methods must fit the metric, and the necessary human and technical capacity must exist to be able to apply the appropriate methods. Considering appropriate data collection methods as part of the process of selecting metrics can also help identify synergies where the same process can be used to gather data on multiple metrics.

IV.B.
IDENTIFYING PERFORMANCE GAPS USING DIAGNOSTIC ASSESSMENT RESULTS

Once a diagnostic assessment has been conducted, the results must be compared against some standard to determine if there are performance gaps. There are three general comparison strategies for identifying performance gaps: within-country comparison, cross-country comparison, and comparison against domestic or international technical standards.

Within-Country Comparison

Within-country comparison looks at results across health service delivery organizations in a given country, using the top-performing organization or the average performance level among organizations as the standard of comparison. In each of the six performance domains, organizations would be compared against a frontier performance level or production function. This approach has been widely used to model health facility efficiency, and in some cases learning, with the frontier defined either by the organization with the best results or by the average performance level among organizations in the sample (Vitaliano and Toren 1994; Zuckerman et al. 1994; Pisano et al. 2001; Rosko and Mutter 2008; Vitikainen et al. 2009; Bernet et al. 2010). If the organizations in the sample are very different in terms of size, location, or population served, it may be necessary to group the organizations on the basis of key shared characteristics that are salient to the content of the assessment and then identify a best performer in each group, or to use an analytic method that controls for these characteristics (Newhouse 1994; Rosko and Mutter 2008). Policy makers should also examine the distribution of performance levels across organizations covered by the assessment to determine if patterns exist that might point to possible determinants of performance (Rosko and Mutter 2010). The advantage of within-country comparison is that it controls for many, though not all, elements of the environment that impact organizational performance.
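A minimal numeric sketch of this frontier-comparison logic follows; the facility names and figures are invented, and published studies use richer methods such as data envelopment analysis or stochastic frontier analysis with multiple inputs and outputs. Here each facility is simply scored against the best observed output-to-input ratio:

```python
# Illustrative sketch: single-input, single-output efficiency benchmarking
# against the best performer in a sample. All data are hypothetical; real
# studies handle multiple inputs and outputs and control for facility
# characteristics such as size and case mix.

facilities = {
    # name: (outpatient visits per year, clinical staff)
    "Clinic A": (12000, 10),
    "Clinic B": (9000, 12),
    "Clinic C": (15000, 11),
    "Clinic D": (6000, 8),
}

# Output per unit of input for each facility.
ratios = {name: out / staff for name, (out, staff) in facilities.items()}

# The frontier here is simply the best observed ratio.
frontier = max(ratios.values())

# Relative efficiency: 1.0 for the frontier facility, below 1.0 otherwise.
scores = {name: r / frontier
          for name, r in sorted(ratios.items(), key=lambda kv: kv[1], reverse=True)}

for name, score in scores.items():
    print(f"{name}: relative efficiency {score:.2f}")
```

In this invented sample, Clinic C defines the frontier (score 1.00) and the other facilities are expressed as a fraction of it, which is the basic shape of the best-performer comparisons cited above.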
The disadvantage of this approach is that it does not reveal whether the best performing organization is 'good enough' relative to technical standards of health services delivery. For example, a hospital may provide the best clinical quality in a given country but still fall short of international standards of care. Within-country comparison is therefore best suited to performance domains like efficiency, utilization, learning, and sustainability, for which defined technical standards are less likely to exist.

Cross-Country Comparison

Using diagnostic assessment results to compare against organizational performance in other countries is an attractive technique when the degree of performance variability in the focus country is limited. For example, if all rural health clinics in a country perform in a narrow range of utilization, it is difficult to compare them against each other to know whether this level of performance is high or low. Comparing against a neighboring country can provide perspective on whether more could be done in this performance domain, and inspiration for performance improvement strategies to adopt. The value of comparing diagnostic assessment results with organizational performance in other countries is enhanced when the other countries have similar population health needs, health system characteristics, and political and economic environments (Joumard et al. 2010). One disadvantage of this approach is that it tends to rely on aggregate or average performance scores for each country in the comparison. Such aggregation can obscure the distribution of performance among organizations in each country, which is important because the degree of variation in performance can point to environmental conditions (e.g., presence or absence of regulation) that may be influencing facility-level performance.
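The risk of aggregation can be made concrete with a small invented example: two countries whose facilities have the same average utilization but very different spreads, a difference that a comparison of national averages alone would miss.

```python
# Illustrative sketch: identical national averages can hide very different
# facility-level distributions. All utilization figures are invented.
from statistics import mean, pstdev

# Annual outpatient visits per facility in two hypothetical countries.
country_x = [1000, 1010, 990, 1005, 995]   # uniform performance
country_y = [1800, 200, 1500, 400, 1100]   # wide variation

for name, visits in [("Country X", country_x), ("Country Y", country_y)]:
    avg = mean(visits)
    cv = pstdev(visits) / avg              # coefficient of variation
    print(f"{name}: mean = {avg:.0f}, coefficient of variation = {cv:.2f}")
```

Both countries average 1,000 visits per facility, but the dispersion differs sharply; examining a spread statistic alongside the mean preserves the distributional signal that the text argues is diagnostically important.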
As with within-country comparison, the cross-country approach to identifying performance gaps is best adapted to the domains of efficiency, utilization, learning, and sustainability, for which universal technical standards do not exist.

Comparison with Domestic or International Standards

Where technical or legal standards exist, they offer a good benchmark against which to compare diagnostic assessment results. This approach is well-suited to performance domains like quality and access, for which such standards tend to exist. In terms of quality, there are domestic and international medical practice guidelines that set standards for the process of clinical care delivery. There are also well-established standards and laws for elements of managerial quality such as financial management and procurement. Intermediate outcomes like access may also be well-suited to this comparison approach if some level of access to health services has been legally guaranteed in a country. This approach is useful because it reveals if the best performing organization in the country is falling short of a domestic or international performance standard. It is also a transparent basis for performance rankings and target setting. However, it cannot be applied effectively to intermediate outcomes that do not have clear technical or legal standards.

IV.C. ASSESSING ROOT CAUSES OF PERFORMANCE GAPS

Identifying root causes of performance gaps can be a complex, resource-intensive, and hence often overlooked process. There are three key methodological principles that can enhance the likelihood that a systematic and accurate assessment of potential root causes will be accomplished: use of a multidisciplinary team; application of qualitative and mixed methods approaches (i.e., an integrated use of both quantitative and qualitative methods) (Creswell and Plano Clark 2007); and use of formal scientific problem solving methods such as root cause analysis (RCA) (Latino and Latino 2006).
Multidisciplinary Teams

Multidisciplinary, diverse teams are essential to successful quality improvement efforts, such as the redesign of care processes and the implementation of new care policies and protocols (Nelson et al. 2002; Shortell et al. 2004; Lemieux-Charles and McGuire 2006; Bradley et al. 2006; Keroack et al. 2007). At the level of frontline health care delivery, 'multidisciplinary' is used in the health services literature to refer to different medical care specialties or occupational roles within the facility (e.g., administrators, doctors, nurses, pharmacists) (Buljac-Samardzic et al. 2010). The composition of teams is a key consideration; representatives should include managers, clinicians, and especially those on the front lines of care delivery who, because of their roles, have important insights into potential sources of performance gaps. Explicit managerial support is critical to team success (Rubenstein et al. 2002; Mills and Weeks 2004; Lukas et al. 2007); teams must be authorized to fully investigate all relevant structural, procedural, and environmental factors that might contribute to identifying the root causes of performance gaps.

Qualitative and Mixed Methods

The capacity to apply qualitative and/or mixed methods approaches to uncovering root causes of performance problems is essential. Qualitative research is a form of scientific inquiry that spans different disciplines, fields, and subject matter and comprises many varied approaches (Denzin and Lincoln 2000). A qualitative approach can illuminate aspects of organizational context and health care delivery that influence organizational performance and quality of care (Sofaer 1999; Curry et al. 2009). Qualitative methods can also identify the potential causal mechanisms that are associated with a given outcome and generate hypotheses about such mechanisms.
Compared with quantitative methods, qualitative methods are often better suited to measuring complex aspects of health care delivery systems, such as organizational change, clinical leadership in implementing evidence-based guidelines, and patient perceptions of quality of care (Green and Britten 1998; Shortell 1999; Greenhalgh 2002; Pope et al. 2002; Eccles et al. 2003). Mixed methods, in which quantitative and qualitative methods are combined (Creswell and Plano Clark 2007), are increasingly recognized as valuable because they can capitalize on the respective strengths of each approach (Jick 1979). Common qualitative data collection methods include in-depth interviews, focus groups, and participant observation. Appendix 10 lists sources that can guide users through the principles and practices of qualitative and mixed methods.

Root Cause Analysis

RCA comprises a set of formal problem solving techniques that focus on finding and addressing the most important reasons for performance problems or events, rather than simply addressing the symptoms or manifestations of the problem because doing so may be more expedient or less resource-intensive. RCA uses specific analytic tools such as 'fishbone' cause-and-effect diagramming, flow charting of work processes, and failure modes and effects analysis (Ishikawa 1990; Latino and Latino 2006). These tools prompt analysis of cause-and-effect systems by exploring why a given event occurs at each level of investigation. As with multidisciplinary teams, groups conducting RCA must include organization leaders as well as those most familiar with the processes and systems under review.

IV.D. SELECTING STRATEGIES

Performance improvement strategies should be selected to address the root causes of performance gaps, but they also need to consider contextual factors of an organization's environment, implementation capability, and existing efforts to improve performance.
Once root causes, environmental factors, implementation capability, and ongoing strategy efforts have been assessed, strategy options can be developed, drawing where possible on the examples of high-performing organizations facing similar internal and external conditions. These strategy options should be compared and evaluated using criteria agreed upon by the stakeholders involved in the performance improvement program. Strategy selection is therefore a multi-stage process with several distinct analytical activities, each with its own associated methods.

Assessing Environmental Conditions

Each organization has a specific constellation of environmental factors that influences the applicability and likely effectiveness of a given performance improvement strategy. Systematically mapping this array of environmental factors can reveal potential pitfalls in strategies that otherwise seem well suited to the organization's internal dynamics. Environmental factors include the distribution of political power, health system governance arrangements, the prevailing economic outlook, demographic and epidemiological transitions, health care financing and reimbursement systems, and the structure of health care markets. In assessing the environment, decision makers should seek to answer such questions as: What changes in environmental conditions are likely in the short, medium, and long terms? How are other organizations responding or proactively changing? What factors in the environment enable or constrain performance? Which environmental factors, if any, are mutable? These types of questions can be addressed via several methods. Organizations can solicit expert external advice to assist them in mapping environmental trends of which they may be unaware. Organizations can also convene internal discussions among their members or external discussions with other organizations in their industry.
Identifying current and possible future trends can help decision makers avoid strategies that are likely to become rapidly obsolete. An additional environmental factor is the organization's history. Organizations may have undertaken performance improvement programs in the past that can offer a source of lessons learned as well as mistakes to avoid. The success (or lack thereof) of past performance improvement programs can also be an important contributing factor to organization members' willingness to try new strategies. Assessment of an organization's historical experience should answer such questions as: What performance interventions have been tried before? What were the results? Why did those interventions succeed or fail? How are conditions today similar to or different from those in the past? What lessons can be applied going forward? Answering these questions typically involves collecting data from an organization's members via surveys, interviews, focus groups, or facilitated larger group discussions. One challenge in assessing historical context is that an organization's members change over time. In some cases, there will not be any members with institutional memory of prior performance improvement efforts. In these cases, it may be relevant to draw on members' past experiences in other organizations, which may provide applicable lessons for strategy design and a gauge of members' likely degree of receptivity to new performance improvement initiatives.

Assessing Implementation Capability

An organization must be able to implement the selected performance improvement strategy. Implementation capability should be evaluated prior to final strategy selection and in light of the strategy options under consideration. The purpose of the assessment is to identify which strategies could be successfully deployed given the organization's ability and motivation.
A strategy can be chosen to fit an organization's existing implementation capability or designed with a first phase that builds the capability needed to execute the subsequent phases of the strategy. Often, an organization will have some level of slack resources that can be mobilized to facilitate improvements in performance. Slack resources are resources within the organization that are not currently committed to technical production and that, if deployed, could move the organization toward its theoretical optimal production frontier (Cyert and March 1963; Leibenstein 1976). Slack resources can contribute to an organization's implementation capability. Implementation capability is related to the concept of an organization's 'readiness for change', which can be assessed with a number of qualitative or quantitative instruments (Gustafson et al. 2003; Hamilton et al. 2007; Ovretveit et al. 2007; Weiner et al. 2008; Stetler et al. 2009). Assessments of implementation capability should answer such questions as: What organizational resources (e.g., staff, technology, or expertise) would be necessary for change? Are the needed resources available? Are there any slack resources? Do staff perceive a need for change, and are they motivated to change? Do staff perceive themselves to be capable of implementing change? Is there senior management and stakeholder support for the change? Answering these questions will require both quantitative and qualitative data collection, which could be carried out by actors internal or external to the organization.

Assessing Extent of Strategies Already in Progress

An organization's previous and ongoing efforts to implement performance improvement strategies should also be assessed, as these experiences can influence the likely effectiveness of future strategies. This assessment has two stages: first, documenting existing performance improvement activities and, second, evaluating the effectiveness of those activities.
The objective of the first stage of the assessment is to determine the extent to which key elements in each of the seven strategy areas (Table 6) have already been undertaken or are currently in progress. Questions for this first stage should enable organizational managers and external evaluators to systematically map where the organization is within each strategy area (Table 8). In the second stage of the assessment, the objective is to determine whether those strategy elements already in progress are effective in the particular organizational context. There are multiple methods for determining if a given strategy is producing the desired effect, which are summarized in Section IV.E; however, presenting specific measures of effectiveness for each of the seven strategy areas is beyond the scope of this guide. Both stages, documentation and evaluation, are important for assessing the extent of strategies already in progress. Once assessed, existing initiatives should inform the selection of future strategies by providing a positive foundation upon which future strategies can build or lessons learned about errors to avoid. This assessment can also reveal gaps where key elements of otherwise sound strategies have yet to be implemented; these gaps may represent 'low-hanging fruit' for improving organizational performance.

Table 8: Examples of Questions for Assessing Extent of Strategies Already in Progress

Standards and guidelines
- Has a review of facility operating processes been conducted to identify those processes that can be standardized?
- Have standard operating procedures been developed for those processes identified as appropriate for standardization?
- Have staff been trained on existing standards and guidelines?
- Is adherence to existing standards and guidelines part of staff performance criteria?

Organizational design
- Does the organization have a functional or cross-functional (e.g., division or matrix) structure?
- Has the organization's structure been reviewed and validated in light of the organization's size, staff capacity, and performance objectives?
- Are lines of reporting for staff clearly established and understood by both staff and managers?
- Is a process in place to assess whether managers have sufficient time and capacity to effectively supervise their direct reports, and to adjust manager span of control accordingly?
- Has managerial and staff authority been formally aligned with each role's assigned responsibilities?
- Is there a mechanism in place at the organization level to adjust authority or responsibility of roles when needed?

Education and training
- Have staff received pre-service training that equips them with required competencies for their roles?
- Does staff pre-service training include explicit activities and approaches to socialize staff into norms of professionalism?
- Is a system in place to identify competency-based knowledge and skills gaps of current staff, and to fill these gaps through in-service training?
- Do staff have the opportunity to access new technical knowledge and skills in their field through information resources (print or electronic) and learning events?

Process improvement and technology and tool development
- Are process indicators currently defined and measured?
- Is a system in place to routinely monitor process indicators and identify areas for improvement?
- Have needed tools, equipment, and materials for process improvement been obtained when needed?
- Are procedures in place for testing and evaluating new processes and technologies?
- Has the organization borrowed solutions from other organizations?

Incentives (monetary or non-monetary)
- Have individual or organization-level performance objectives been defined?
- Has staff input been solicited and used to identify relevant incentives related to the objective?
- Has staff authority been aligned with the level required for the behavior targeted by the incentive?
- Is a performance monitoring system in place to track eligibility for incentives?
- Is the incentive scheme operational and understood by the staff and managers involved?

Organizational culture
- Have staff and management been surveyed on their attitudes towards and beliefs about the organization and its work?
- Has an assessment been conducted to identify the formal and informal structures, processes, group dynamics, and communication patterns that contribute to staff attitudes and beliefs?
- Has a vision for the organization and its objectives been developed with input from all staff?
- Has a participatory decision making process been conducted to determine changes that would contribute to an internal culture congruent with the organization's vision and objectives?
- Have the identified changes to structures, processes, groups, and communication patterns been implemented?

Leadership and management
- Have leadership and management roles been established within the health facility?
- Have staff responsibilities and authority been aligned with these leadership and management roles?
- Have leaders and managers been given the necessary autonomy and authority to develop and to achieve the organizational mission?
- Are systems in place to develop problem solving skills among managers and staff at the facility level?

Identifying Positive Deviants and Proven Strategies

One important approach to identifying innovative and potentially effective strategies is known as 'positive deviance' (Marsh et al. 2004). 'Positive deviants' in health care are organizations that consistently demonstrate exceptionally high performance in an area of interest. The central premise of a positive deviance approach (Sternin and Choo 2000; Marsh et al.
2009) is that solutions to problems that face a community often exist within that community, and that certain members possess wisdom that can be generalized to improve the performance of other members. In the case of organizational performance, the 'community' refers to a given group of health care delivery organizations. Many of the strategies used by positive deviants rely on resources that already exist in the local environment, which can increase their adoption and sustained use (Walker et al. 2007). The power of a positive deviance approach to improve health outcomes has been demonstrated globally for complex problems including pregnancy outcomes (Ahrari et al. 2002), condom use (Positive Deviance Initiative), and childhood nutrition (Sternin et al. 1999; Marsh et al. 2002; Marsh et al. 2004). Importantly, the positive deviance approach allows for the explicit integration of real-life implementation issues and organizational context because it seeks to characterize not just what processes and practices are present in top performing organizations but also the context (e.g., organizational culture, leadership support, norms of behavior) in which they are implemented. Although the replication of best practices requires sensitivity to the unique context of the adopting organization (Emery and Trist 1965; Susman 1983; Van de Ven 1995; Berta and Baker 2004; Greenhalgh et al. 2004; Auerbach et al. 2007; Yuan et al. 2010), the positive deviance approach characterizes important contextual factors as part of the description of how top performers achieved their success.

Criteria for Selecting Strategies

Strategy selection involves the development of several strategy options followed by systematic comparison of those options using defined criteria. Strategy options should be developed to address the root causes of performance gaps; Table 7 provides a starting point by mapping strategy categories to the root causes they target.
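Comparing strategy options against stakeholder-agreed, weighted criteria can be sketched in a few lines of code. The criteria names, weights, and ratings below are hypothetical examples invented for illustration, not recommendations from this guide; in practice the criteria, their weights, and the rating scale would be negotiated by the stakeholders before any option is scored.

```python
# Illustrative sketch of weighted multi-criteria scoring of strategy options.
# All criteria, weights, and ratings here are hypothetical.

# Weights agreed by stakeholders in advance (here they sum to 1.0).
weights = {
    "political_feasibility": 0.25,
    "cost_affordability": 0.25,
    "time_to_implement": 0.15,
    "empirical_evidence": 0.20,
    "stakeholder_priorities": 0.15,
}

# Each strategy option is rated 1 (poor) to 5 (strong) on every criterion.
ratings = {
    "standards_and_guidelines": {
        "political_feasibility": 4, "cost_affordability": 5,
        "time_to_implement": 4, "empirical_evidence": 3,
        "stakeholder_priorities": 3,
    },
    "incentive_scheme": {
        "political_feasibility": 2, "cost_affordability": 2,
        "time_to_implement": 3, "empirical_evidence": 4,
        "stakeholder_priorities": 4,
    },
}

def weighted_score(option_ratings, weights):
    """Aggregate one option's criterion ratings into a single weighted score."""
    return sum(weights[c] * r for c, r in option_ratings.items())

# Rank options from highest to lowest aggregate score.
for option, r in sorted(ratings.items(),
                        key=lambda kv: weighted_score(kv[1], weights),
                        reverse=True):
    print(f"{option}: {weighted_score(r, weights):.2f}")
```

The aggregation rule itself is a design choice: a simple weighted sum, as above, allows a strong rating on one criterion to compensate for a weak one, which may or may not match stakeholder intent.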
Following assessment of environmental factors and implementation capability, strategy options should be narrowed based on whether their conditions for effectiveness (Table 6) are present in the organization's internal and external environments. The remaining strategy options should be compared and ranked based on criteria developed and agreed upon by the stakeholders involved in the performance improvement effort. A system for weighting the criteria and aggregating scores across criteria should be established before the strategies are compared and rated. Recommended criteria include: the degree of political feasibility and community support for the strategy, the strategy's cost and affordability, the time required for implementation, whether the strategy is likely to be effective based on the best empirical evidence, and whether the strategy addresses the priorities of key stakeholders. The process of rating strategy options can be formal or informal, qualitative or quantitative, written or oral, anonymous or public. The appropriate method will depend on such factors as the number of stakeholders involved, the distribution of power among stakeholder groups, the time available for the strategy selection process, and cultural norms of communication within the organization and between the organization and its stakeholders.

IV.E. MONITORING PROGRESS IN PERFORMANCE IMPROVEMENT INTERVENTIONS

To measure the effect of a given strategy on performance, several preconditions must be present: established performance targets, adequate data infrastructure and capacity, facility-level and government management support, defined intermediate indicators, and a feasible and appropriate study design. First, specific, measurable performance targets (e.g., improving patient satisfaction ratings by 20% in 12 months) should be clearly defined at the outset.
The a priori setting of targets can be guided by information from a range of sources, including the relevant empirical literature, the views of those most closely involved with the performance domain, the history and current performance status of the facility, and the performance of competitors. Targets should be clear, visible, and known by all in the facility. Second, a functional data infrastructure must be in place at the inception of strategy implementation. One essential consideration is whether data elements can be integrated into existing systems or will require investment in new systems. Procedures and resources must be allocated in order to implement either approach. Another consideration is whether the data should be managed, analyzed, and reported through self-monitoring by the facility or by an external third party. Last, it is critical to determine in advance precisely how the data will be used and by whom. The particular needs of end users must be addressed in the definition, analysis, and reporting of data in order to ensure maximum relevance and utility. These needs will likely vary by constituent group (e.g., facility managers, Ministry of Health officials, and decision makers outside the health system such as the Ministry of Finance). Third, support for the performance improvement objectives must be fully present at both the facility and governmental levels, and reflected in the provision of the resources necessary to accomplish the effort, including any requisite investment in data infrastructure. Management must also be flexible enough to make mid-course corrections to the strategy in response to changes in the environment or discovery of a misalignment between strategy choice and facility capacity and/or environmental conditions.
This is often a goal of participatory models of program evaluation (Guba and Lincoln 1989; Aaker and Shumaker 1994; Freeman 1994; CDC 1999; Patton 2002), including realist models (Pawson and Tilley 1997), and is desirable as long as fidelity to and adaptation of the original intervention are documented. Management support should include valuing and facilitating feedback and learning mechanisms. This includes creating opportunities and structures at the facility level for reflection on data and learning by stakeholders involved in improvement efforts. Such reflection allows for timely assessment of what is and is not working, as well as consideration of adjustments that can be made to support achievement of the performance objective. Fourth, data infrastructure capacity and resources must support the assessment not only of endpoints but also of intermediate process indicators. Midpoint process indicators are necessary for documenting whether the strategy is being implemented and for monitoring progress toward objectives. Such indicators should be specific, measurable, and aligned with the ultimate performance objective. Last, the method chosen to determine the effects of a given strategy on performance must be carefully selected to be feasible, appropriate, and responsive to the information needs of decision makers. The spectrum of possible study designs includes pre/post or time series intervention in a single organization, pre/post intervention with one or more comparison organizations, and randomized controlled trials (RCTs). Each design has strengths and limitations, and the selection should be based on the type of evidence required by decision makers.
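One of the designs listed above, a pre/post intervention with a comparison organization, reduces to a difference-in-differences calculation: the change in the intervention facility minus the change in a comparison facility, which nets out trends affecting both. The sketch below uses invented figures purely to illustrate the arithmetic; it is not an example from the guide.

```python
# Hypothetical sketch of a pre/post design with a comparison organization
# (difference-in-differences). All figures are invented for illustration.

# Mean value of a performance indicator (e.g., % of patients satisfied),
# before and after the intervention, in the intervention facility and in
# a comparison facility that did not receive the intervention.
intervention = {"pre": 62.0, "post": 74.0}
comparison   = {"pre": 60.0, "post": 65.0}

def diff_in_diff(treated, control):
    """Change in the treated facility minus change in the comparison
    facility; secular trends common to both facilities cancel out."""
    return (treated["post"] - treated["pre"]) - (control["post"] - control["pre"])

effect = diff_in_diff(intervention, comparison)
print(f"Estimated intervention effect: {effect:.1f} percentage points")
```

Here the intervention facility improved by 12 points but the comparison facility improved by 5 over the same period, so the estimated intervention effect is 7 points rather than 12. The validity of this estimate rests on the assumption that the two facilities would have followed parallel trends absent the intervention.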
The nature of evidence and the types of inference generated through each of these designs vary, and can be classified as adequacy (demonstration of expected changes in behaviors, health services, or health status), plausibility (demonstration that the strategy is likely effective), and probability (proof that the strategy is efficacious or effective) (Habicht et al. 1999; Peters et al. 2009). Meeting the probability level of inference requires RCTs. Although RCTs are perceived as the gold standard for evaluating the efficacy of an intervention, the RCT design is often not suitable for studying organizational performance, for several reasons. First, rigorous RCTs require holding constant all potentially influential variables. Because it is difficult if not impossible to hold constant certain organizational features (e.g., leadership, learning culture) or the broader environment (e.g., regulatory and payment systems or the political and social context), both of which can influence performance improvement strategies, observed outcomes cannot be attributed exclusively to the intervention. Second, controlling these contextual factors in order to preserve the integrity of an intervention limits understanding of the role of such factors in performance improvement and fails to provide insights into how the intervention works in the 'real world' or why the strategy may have succeeded or failed. Third, spillover effects of interventions across health care delivery organizations are commonplace, as physicians belong to professional and social networks that can bring them into contact even if their organizations are distinct. For instance, individuals in the intervention arm may share information through social networks, potentially contaminating behavior in control arm facilities and attenuating observable effects.
Importantly, this kind of diffusion of innovations can in fact have powerful effects on performance across the larger system and should not be stifled for the purpose of conducting an RCT; indeed, understanding the mechanisms of such diffusion is an area in need of further study.

V. CONCLUSION: USING THEORY TO INFORM PRACTICE

This guide presents frameworks and principles for defining, measuring, and improving health service organizational performance. As the empirical results reported in Appendix 1 illustrate, these theoretical approaches have been applied in highly diverse ways to performance measurement in the field. No single set of metrics or methods will be perfectly adapted to every organizational context. The challenge for policy makers and organizational managers is therefore to determine which performance measures and improvement strategies are appropriate to a particular context. The conceptual and methodological approaches presented in this guide offer direction for tailoring performance criteria and interventions to organization-level realities. We close with a few recommendations for using theory to inform practice.

Recommendation 1: Create interdisciplinary teams for organizational performance assessment and improvement. In health services research and interventions, interdisciplinary teams should include individuals with backgrounds in clinical medicine, public health, and diverse social sciences such as law, economics, psychology, sociology, and management. Using interdisciplinary teams can facilitate holistic evaluations of organizational performance and can stimulate consideration of a broader range of improvement strategies. Convening and effectively managing interdisciplinary teams requires particular skills; team facilitators need to invest time and effort in establishing norms of mutual respect and cross-disciplinary communication and learning within the team.
However, when such teams work effectively, the benefits from applying diverse theoretical approaches to a common problem can be substantial.

Recommendation 2: Apply analytical frameworks from OB/OT and organizational psychology early in the process of designing performance improvement interventions. Organizational behavior (OB) and organizational theory (OT) are valuable in informing strategy design. Understanding the interpersonal and inter-organizational dynamics operating in a given hospital or health clinic should be a first step in any development of performance improvement programs. OB and OT frameworks provide guidance for thinking through key considerations in understanding an organization's internal and external relationships. These relationships should shape strategy content from the outset rather than entering the performance improvement process only at the final stage of making the strategy function in practice.

Recommendation 3: Determine the optimal scientific method for understanding the organizational performance issue in question. Some elements of organizational performance are amenable to quantitative measurement approaches, while others require qualitative data collection or mixed methods. Questions of whether the organization is performing to standard may be well answered by quantitative indicators; however, questions of why the organization is or is not performing will usually require some degree of qualitative data collection to answer. Acquiring fluency in both quantitative and qualitative methods is important for both health services researchers and decision makers; in particular, government officials and organizational managers may need coaching on how to use and evaluate the results of qualitative studies, which may be relatively less familiar.

Recommendation 4: Tailor selection of performance improvement strategies to each organization's external environment and internal implementation capabilities.
There are no universal prescriptions for improving the performance of health service delivery organizations; interventions must be tailored to each organization. Developing strategies at the organizational level can be challenging for many reasons, including a lack of data, the absence of managers within the facility to guide change, and insufficient capacity to execute the strategy development process at each organization. However, the primary challenge to tailoring strategies to each organizational context is often the desire of health system policy makers to create generic system-wide solutions. Such solutions can have important benefits, such as economies of scale, but they can also fail repeatedly due to inattention to the microenvironments within each health service delivery organization. Many system-level interventions are valuable in creating enabling environments for organizational performance, as discussed in Section II. These environmental conditions should inform the choice of strategy and need to be balanced by consideration of organization-level root cause analysis and implementation capability. In sum, measuring and improving organizational performance is complex because organizations are diverse and dynamic. Analysis and intervention should happen at the levels of the environment, the health system, and the health facility, using insights from multiple disciplines. Users of this guide should take away a toolkit of concepts and methods that can help them identify which questions to ask and how to answer them in the context of defining, measuring, and improving health service delivery performance. Having this broad set of tools with which to understand and enhance organizational performance can contribute to improving health service delivery and, ultimately, health outcomes.

APPENDIX 1: EMPIRICAL LITERATURE REVIEW METHODOLOGY

The literature review was conducted using PubMed, a database of medical and scientific literature maintained by the U.S.
National Library of Medicine. This database was selected as it is the most comprehensive database indexing relevant medical and health services journals. We retrieved an initial set of 2,371 peer-reviewed articles using the following search parameters:

- Geography: all countries that were eligible for World Bank loans in 2009 (World Bank 2009);
- Medical Subject Headings (MeSH): Health Facilities AND (Health Services Administration OR Health Quality, Access and Evaluation); all subject headings were 'exploded' to capture articles categorized under subheadings;
- Publication dates: 2005-2010;
- Language: English; and
- Study type: Human.

We used a two-stage screening process to derive the final sample (Figure A1.1). First, we reviewed the abstracts of all 2,371 articles retrieved in the initial search. An article was excluded in Stage 1 if it:

- did not mention any of the six performance domains in our taxonomy;
- was limited to clinical research rather than the care delivery process; or
- focused on macro-level health systems rather than facility-level health services.

Second, we reviewed the full text of the 685 articles that had passed the first-stage screening. At this second stage, an article was excluded if:

- no specific measures were provided for any of the six intermediate outcome domains;
- Stage 1 exclusions were discovered during the full text review;
- the full text of the article was not available; or
- the article was not an original research study.

[Figure A1.1: Literature Review Sample Derivation. Initial PubMed search: N=2,371 articles. Stage 1 screening (review of abstract): 1,686 articles excluded. Stage 2 screening (review of full text): N=685 articles, of which 504 were excluded (no relevant metrics: 248 articles, 49%; out of scope: 172 articles, 34%; no full text available: 45 articles, 9%; not an empirical study: 39 articles, 8%). Final sample: N=181 articles.]
Following this two-stage screening, 181 articles were retained in the final sample for analysis. For each article in the final sample, we identified the metrics used to measure any of the six organizational intermediate outcomes. We then catalogued the metrics used in each article by domain, dimension, and sub-dimension, and recorded the measurement methods used. Finally, we noted if the metric was related to a specific unit in the health facility (for example, laboratory or emergency room), a specific health issue (for example, HIV/AIDS or reproductive health), or a cross-cutting theme (for example, information technology or safety). We used Microsoft Office Excel (2007) to record the data and to perform relevant descriptive analysis.

RESULTS

We summarized the number and percentage of articles by: 1) domains and dimensions (Table A1.1), 2) measurement method (Table A1.2), 3) World Bank geographic region (Table A1.3), 4) area of health service provision (Table A1.4), 5) health facility unit (Table A1.5), and 6) cross-cutting theme (Table A1.6). These six tables are presented in Appendices 2-7 with the corresponding article references for each category of metrics. The 181 articles in the final sample are listed by reference code in Appendix 8. Using Appendices 2-8, one can locate the subset of articles that address a particular domain, dimension, sub-dimension, measurement method, region, health service, health facility unit, or cross-cutting theme. We recommend referencing the original articles to obtain the full set of metrics used and to understand the details of the methodology as applied in the study context and reported in the article.
Studies by Performance Intermediate Outcome Domain and Dimension

Table A1.1: Frequency Distribution of Articles by Domain and Dimension (number of articles; percentage of sample)

Quality: 150 (83%)
- Clinical Quality: 72
- Management Quality: 47
- Patient/family Satisfaction: 31

Efficiency: 18 (10%)
- Cost-to-service ratios: 13
- Patient or procedure volume per time period: 1
- Staff-to-service ratios: 4

Utilization: 31 (17%)
- Patient or procedure volume, general: 12
- Patient or procedure volume relative to capacity: 8
- Patient or procedure volume relative to population: 1
- Patient or procedure volume relative to population health characteristics: 6
- Patient or procedure volume relative to the need of the patient: 3
- Service usage relative to income group: 1

Access: 36 (20%)
- Financial access: 10
- Information access: 4
- Linguistic access: 1
- Physical access: 10
- Service availability/allocation: 11

Learning: 18 (10%)
- Use of data audit and feedback processes: 8
- Innovation adoption: 2
- Training and continuing education for workforce: 8

Sustainability: 17 (9%)
- Commitment of staff: 14
- Community support: 2
- Strategic planning: 1

TOTAL NUMBER OF ARTICLES: 181*

*Percentages add to more than 100% as some articles had metrics in several domains or dimensions.

Studies by Measurement Method

The most common measurement method used in the sample was the review of hospital records (53.6%) (Table A1.2). Provider surveys (16%) and patient surveys (12.2%) were the next most common methods. Review of hospital records was used to measure performance in each of the six domains. This extensive use of records may reflect the dynamics and constraints of applied research in health facilities. Data collection via hospital records is less disruptive to providers and patients, and often less resource-intensive, than primary data collection, and is therefore more likely to be acceptable to health facility managers.
The frequent use of records may also reflect a predominance of researchers with expertise in quantitative methods and limited recognition of the potential contributions of qualitative methods to understanding certain dimensions of organizational performance.

Table A1.2: Frequency Distribution of Articles by Measurement Methods Used

Measurement Method: Number of Articles (Percentage of Sample)
Hospital records review: 97 (53.6%)
Provider survey: 29 (16.0%)
Patient survey: 22 (12.2%)
Observational assessment: 15 (8.3%)
Patient exit surveys: 8 (4.4%)
Patient interview: 7 (3.9%)
Household survey: 5 (2.8%)
Provider performance in case-simulation: 5 (2.8%)
Provider interviews: 4 (2.2%)
Vignettes: 2 (1.1%)
Patient focus groups: 2 (1.1%)
Community member interviews: 1 (0.6%)
Data collection through simulated patients: 1 (0.6%)
Community focus groups: 1 (0.6%)
Patient exit interviews: 1 (0.6%)
Provider focus group: 1 (0.6%)
Review of anonymous providers' self-reports: 1 (0.6%)
TOTAL NUMBER OF ARTICLES: 181*
*Percentages add to more than 100% as some studies used multiple measurement methods.

Studies by Geographic Region of World Bank Client Countries

The largest share of articles (34.3%) came from studies conducted in Sub-Saharan Africa, followed by the Middle East and North Africa (18.2%) and East Asia and the Pacific (17.1%) (Table A1.3). Europe and Central Asia had the fewest articles in the sample (2.2%).
Table A1.3: Frequency Distribution of Articles by World Bank Geographic Region

Region: Number of Articles (Percentage of Sample)
Sub-Saharan Africa: 62 (34.3%)
Middle East and North Africa: 33 (18.2%)
East Asia and Pacific: 31 (17.1%)
South Asia: 28 (15.5%)
Latin America and Caribbean: 22 (12.2%)
Europe and Central Asia: 4 (2.2%)
Multiple: 1 (0.6%)
TOTAL NUMBER OF ARTICLES: 181

Studies by Type of Health Service, Health Facility Unit, and Cross-Cutting Theme

Among the 43 articles that focused on a targeted health service area, 27 (62.8%) focused primarily on women's health services, such as maternal care, delivery, and family planning services provided by the health facility (Table A1.4). A total of 20 articles (46.5%) focused on services for the care of newborns and children. Only a small number of articles focused on organizational services specifically for HIV/AIDS, TB, or malaria.

Table A1.4: Frequency Distribution of Articles by Type of Health Service

Health Service Area: Number of Articles (Percentage of Total)
Women's health services: 27 (62.8%)
Child health services: 20 (46.5%)
HIV/AIDS services: 4 (9.3%)
TB services: 4 (9.3%)
Malaria services: 2 (4.7%)
TOTAL NUMBER OF ARTICLES: 43*
*Percentages add to more than 100% as some studies addressed multiple health service areas.

Among the 56 articles that focused on a targeted unit or department in the health facility, 48.2% involved indicators for pharmacy operations and 28.6% involved indicators for laboratory services (Table A1.5). Articles pertaining to outpatient, emergency, intensive care, surgery, and registration units each constituted less than 10% of the 56 articles.
Table A1.5: Frequency Distribution of Articles by Health Facility Department/Unit

Department/Unit: Number of Articles (Percentage of Total)
Pharmacy: 27 (48.2%)
Lab: 16 (28.6%)
Inpatient: 11 (19.6%)
Outpatient: 5 (8.9%)
Intensive Care: 5 (8.9%)
Emergency: 5 (8.9%)
Surgery: 4 (7.1%)
Registration: 3 (5.4%)
TOTAL NUMBER OF ARTICLES: 56*
*Percentages add to more than 100% as some studies addressed multiple health facility units.

Among the 92 articles that focused on cross-cutting issues, 42 were directly associated with providers (health care workers and staff) (Table A1.6). These articles measured attributes such as providers' competence levels, satisfaction with work, and interaction with patients. Only a few articles measured performance indicators related to the community, such as community participation.

Table A1.6: Frequency Distribution of Articles by Cross-Cutting Theme

Cross-Cutting Theme: Number of Articles (Percentage of Total)
Providers: 42 (45.7%)
Management: 29 (31.5%)
Patients: 19 (20.7%)
Sanitation: 18 (19.6%)
Information Systems: 15 (16.3%)
Safety: 15 (16.3%)
Community: 3 (3.3%)
TOTAL NUMBER OF ARTICLES: 92*
*Percentages add to more than 100% as some studies addressed multiple themes.

Instructions for Looking Up Article References

Each of the articles in the final sample has an identification code (Article ID) beginning with the letter "R". Articles are listed in order of their Article IDs in Appendix 8. To find a given article listed in Appendices 2-7, note its Article ID and then look up that identification code in Appendix 8.

APPENDIX 2: ARTICLE REFERENCES BY DOMAIN, DIMENSION & SUB-DIMENSION OF PERFORMANCE
(Domains, dimensions, and sub-dimensions are listed alphabetically. Article IDs refer to the citation list in Appendix 8.)
Access
    Financial access
        Ability to pay for services: R15, R21, R63, R100, R104, R122, R134, R142, R168
        Opportunity cost: R121
    Information access
        Knowledge about service provision: R21, R34, R79
        Pre-conceived perception of health facility: R134
    Linguistic access
        Language barrier: R104
    Physical access
        Availability of family members to take patient to facility: R21
        Conflict in timing to access health facility: R104
        Geographical constraints: R15, R21, R75, R88, R100, R104, R121, R134, R168
        Healthcare seeking behavior: R39
        Patient's health condition prevents access: R21
    Service availability/allocation
        Clinical services provision: R168
        Health workers availability/allocation: R32, R34, R41, R106, R112, R168
        Infrastructure availability/allocation: R106
        Medical supplies availability/allocation: R100, R106, R131, R164
        Medicines availability/allocation: R28, R100, R131
        Referral pattern: R106
        Types of available services: R34

Efficiency
    Cost-to-service ratios
        Cost-to-service ratios: R40, R135
        DALYs averted by service: R64, R65
        Data Envelopment Analysis: R97, R109, R146, R148
        Delivery and utilization factors compared to cost per delivery: R126
        Mathematical model: R11, R59
        TB services: R87
        Two cost scenarios method: R167
    Patient or procedure volume per time period
        Procedure per time: R77
    Staff-to-service ratios
        Data Envelopment Analysis: R77

Learning
    Use of data audit and feedback processes
        Consumers' use of hospital performance information: R89
        Error reporting: R80
        Feedback from community: R5, R125, R152
        Feedback from patients: R49, R114
        Feedback from providers: R149
    Innovation adoption
        Information system utilization: R22, R93
        Learning organization scale: R86
    Training and continuing education for workforce
        Provider formal training: R49, R106, R128, R150
        Provider's compliance with guidelines: R6
        Quality assurance mechanisms: R120
        Staff participation in meetings: R42

Quality
    Clinical quality
        Adverse drug reactions (ADRs): R12, R19, R35, R137
        Clinical outcomes: R11, R20, R47, R54, R87, R111, R112
        Consultation and counseling quality: R13, R53, R63, R98, R123, R143, R171
        Follow-up/continuity mechanisms: R75, R83, R171
        Guidelines availability and use: R106
        Infection prevention and control and waste management: R5, R46, R51, R60, R155
        Information systems quality: R5, R55, R147
        Laboratory services quality: R1, R113, R117, R157
        Medicine prescribing quality: R10, R62, R69, R84, R103, R118, R150, R154, R156, R161
        Non-prescribing medical errors: R23
        Patient-provider interaction quality: R63, R67, R122, R127, R143
        Patient/provider safety: R67, R178, R181
        Patient's flexibility to make a decision: R171
        Physical resource management - pharmaceutical supplies: R122
        Provider technical competence: R15, R25, R26, R63, R66, R74, R82, R131, R133, R136, R138, R150, R164, R171
        Provider's compliance with guidelines: R16, R85, R99, R131, R167
        Quality assurance mechanisms: R107
        Quality of intensive care unit service: R72
        Quality of laboratory services: R170
        Quality of pharmacy: R29
        Quality of reproductive health services: R116, R130, R162
        Readmissions: R11
        TB service quality: R45, R78
    Management quality
        Appropriate constellation of services: R171
        Clinical services provision: R164
        Environmental factors disturbing care quality: R102
        Financial management: R25, R67, R74
        Governance quality - leadership: R25, R49, R141
        Human resource management quality: R25, R67, R74, R141
        Infection prevention and control and waste management: R2, R7, R8, R18, R25, R38, R48, R56, R61, R74, R119, R124, R129, R141
        Information systems quality: R25, R34, R74, R141, R144, R151, R174, R175, R180
        Infrastructure availability/allocation: R47, R112, R141
        Laboratory services quality: R111
        Manager competence: R139
        Medicine dispensing quality: R9, R36, R62
        Medicines availability/allocation: R168
        Patient flow/wait time quality: R25, R27, R104, R123, R132, R143, R168
        Patient-provider interaction quality: R15, R34, R67, R75, R105, R132, R145, R150, R165
        Physical resource management - general: R47, R66, R112
        Physical resource management - non-pharmaceutical medical supplies: R112
        Physical resource management - non-pharmaceutical non-medical supplies: R66, R105, R112, R128
        Physical resource management - pharmaceutical supplies: R25, R49, R74, R106
        Quality assurance mechanisms: R25, R34, R74, R141, R144, R151, R174, R175, R180
    Patient/family satisfaction
        Pathophysiological factors disturbing care: R102
        Patient-provider interaction quality: R15, R45, R104, R163, R169, R171
        Satisfaction with clinical and management quality: R5, R14, R21, R33, R39, R58, R88, R91, R110, R138, R179
        Satisfaction with clinical quality: R15, R42, R49, R81, R100, R122, R134, R153, R168, R176
        Satisfaction with costs of service: R138
        Satisfaction with management quality: R3, R49, R123, R131, R159, R163, R168, R169
        TB service quality: R45

Sustainability
    Commitment of staff
        Commitment to organization: R86
        Provider satisfaction with work: R5, R33, R68, R80, R86, R92, R125, R175
        Provider's perception of safety: R80
        Staff satisfaction with work: R34
        Staff support and motivation: R45, R94
        Work climate: R50
        Work-related stress: R37, R52
    Community support
        Community capacity building: R96
        Community participation in planning: R34
    Strategic planning
        Responsiveness to environmental factors: R57

Utilization
    Patient or procedure volume - general
        Admission rate: R47
        Choice of service: R17, R26, R100, R166, R177
        Diagnostic imaging usage: R160
        Service usage: R31, R43, R67, R172
        Usage of reproductive health services: R158
    Patient or procedure volume relative to capacity
        Emergency department utilization: R159
        Inpatient utilization: R173
        Intensive care unit utilization: R71
        Patient volume relative to capacity: R4
        Patient-to-staff ratio: R135
        Reproductive health services use: R147
        Service usage: R156
    Patient or procedure volume relative to population
        Referral pattern: R24
    Patient or procedure volume relative to population health characteristics
        Emergency department utilization: R76
        Inpatient utilization: R73, R76
        Outpatient utilization: R73, R76
        Reproductive health services use: R30
    Patient or procedure volume relative to patient need
        Appropriateness of utilization of service: R44
        Laboratory service usage: R115
        Reproductive health services usage: R90
    Service usage relative to income group
        Inpatient utilization: R95

APPENDIX 3: ARTICLE REFERENCES BY MEASUREMENT METHOD
(Article reference IDs refer to the citation list in Appendix 8.)
Community member interviews: R152
Data collection through simulated patients: R143
Focus groups: R42
Focus groups with community: R122
Hospital records review: R4, R5, R6, R7, R9, R10, R11, R18, R19, R20, R21, R22, R23, R25, R26, R27, R28, R30, R31, R32, R35, R36, R38, R40, R41, R44, R46, R47, R48, R50, R51, R55, R57, R62, R64, R65, R69, R70, R71, R72, R73, R74, R75, R76, R78, R82, R83, R84, R87, R90, R95, R97, R99, R101, R103, R105, R107, R108, R109, R111, R112, R113, R115, R118, R120, R126, R128, R132, R135, R137, R140, R144, R145, R146, R147, R148, R150, R151, R155, R156, R158, R159, R160, R161, R162, R164, R167, R170, R171, R172, R173, R174, R175, R178, R179, R180, R181
Household survey: R17, R39, R43, R54, R177
In-depth interviews: R16
Observational assessment: R13, R18, R19, R24, R42, R56, R60, R61, R66, R82, R110, R129, R141, R150, R165
Patient exit interviews: R33
Patient exit surveys: R3, R5, R15, R58, R98, R123, R138, R159
Patient focus groups: R16
Patient interview: R34, R116, R121, R134, R142, R163, R171
Patient survey: R12, R14, R45, R53, R62, R81, R88, R89, R91, R100, R102, R104, R114, R117, R124, R127, R130, R153, R154, R168, R169, R176
Provider focus group: R33
Provider interviews: R23, R67, R79, R152
Provider performance in case-simulation: R1, R2, R8, R18, R29, R33, R37, R45, R49, R52, R59, R63, R66, R68, R77, R80, R85, R86, R92, R93, R94, R133, R136, R157
Provider survey: R106, R119, R125, R128, R131, R149, R166, R175
Review of anonymous providers' self-reports: R23
Vignettes: R138, R164

APPENDIX 4: ARTICLE REFERENCES BY WORLD BANK GEOGRAPHIC REGION
(Article reference IDs refer to the citation list in Appendix 8.)
East Asia and Pacific: R10, R32, R35, R46, R64, R69, R77, R80, R85, R86, R87, R88, R89, R93, R95, R96, R101, R102, R113, R127, R129, R135, R138, R143, R146, R155, R164, R165, R178, R180, R181
Europe and Central Asia: R20, R47, R130, R140
Latin America and Caribbean: R4, R9, R11, R22, R23, R30, R36, R38, R42, R56, R58, R60, R61, R63, R90, R103, R116, R118, R121, R134, R137, R15
Middle East and North Africa: R1, R2, R5, R7, R14, R18, R19, R44, R48, R49, R50, R51, R52, R53, R54, R55, R68, R70, R75, R76, R97, R114, R119, R120, R133, R144, R150, R153, R154, R160, R169, R171, R179
Multiple: R67
South Asia: R3, R8, R12, R29, R31, R33, R39, R40, R59, R62, R81, R83, R84, R92, R136, R141, R142, R145, R151, R156, R157, R161, R162, R163, R166, R168, R170, R173
Sub-Saharan Africa: R6, R13, R15, R16, R17, R21, R24, R25, R26, R27, R28, R34, R37, R41, R43, R45, R57, R65, R66, R71, R72, R73, R74, R78, R79, R82, R91, R94, R98, R99, R100, R104, R105, R106, R107, R108, R109, R110, R111, R112, R115, R117, R122, R123, R124, R125, R126, R128, R131, R132, R139, R147, R148, R149, R152, R159, R167, R172, R174, R175, R176, R177

APPENDIX 5: ARTICLE REFERENCES BY TYPE OF HEALTH SERVICE
(Article reference IDs refer to the citation list in Appendix 8.)

Women's health: R3, R4, R17, R20, R26, R30, R53, R54, R58, R63, R67, R75, R90, R98, R100, R112, R116, R126, R130, R134, R141, R142, R147, R158, R162, R166, R171
Child health: R3, R4, R17, R46, R47, R67, R98, R100, R103, R126, R131, R138, R141, R142, R147, R158, R162, R166, R171, R172
HIV: R13, R40, R91, R110
TB: R45, R78, R87, R121
Malaria: R28, R150

APPENDIX 6: ARTICLE REFERENCES BY HEALTH FACILITY DEPARTMENT / UNIT
(Article reference IDs refer to the citation list in Appendix 8.)
Registration/Triage: R27, R123, R168
Labs: R1, R22, R23, R82, R99, R111, R113, R115, R117, R123, R131, R136, R150, R157, R160, R170
Pharmacy: R9, R12, R13, R19, R21, R23, R28, R29, R31, R33, R35, R36, R62, R84, R91, R99, R100, R103, R118, R122, R123, R131, R150, R154, R161, R164, R168
Outpatient: R73, R76, R123, R138, R168
Inpatient: R11, R73, R76, R95, R102, R123, R138, R153, R161, R168, R173
Intensive Care: R23, R32, R71, R72, R103
Surgery: R60, R64, R111, R180
Emergency: R76, R106, R131, R133, R159

APPENDIX 7: ARTICLE REFERENCES BY CROSS-CUTTING THEME
(Article reference IDs refer to the citation list in Appendix 8.)

Info. Systems: R25, R27, R34, R49, R55, R74, R77, R82, R93, R101, R114, R125, R174, R175, R180
Sanitation: R2, R5, R7, R8, R14, R18, R25, R38, R48, R60, R61, R74, R80, R119, R124, R129, R131, R168
Safety: R2, R7, R8, R25, R36, R38, R67, R69, R74, R119, R129, R137, R155, R178, R181
Provider: R5, R6, R8, R14, R15, R25, R32, R33, R34, R37, R39, R41, R42, R49, R50, R51, R52, R67, R68, R74, R80, R86, R88, R92, R100, R102, R106, R112, R120, R122, R125, R127, R128, R133, R138, R152, R153, R163, R164, R167, R168, R175
Patient: R3, R15, R42, R44, R49, R67, R81, R104, R114, R122, R123, R124, R127, R134, R153, R159, R163, R176, R179
Community: R34, R96, R152
Management: R10, R15, R25, R27, R28, R34, R36, R38, R49, R57, R66, R67, R74, R79, R104, R105, R107, R112, R123, R126, R132, R139, R143, R144, R145, R163, R165, R167, R180

APPENDIX 8: BIBLIOGRAPHIC INFORMATION FOR ARTICLE REFERENCES

ID: Article Citation

R1: Abbassi, M., Rahbar, M., Hekmat Yazdi, S., Rashed Marandi, F., Sabourian, R., & Saremi, M. (2006). Evaluation of the 10th external quality assessment scheme results in clinical microbiology laboratories in Tehran and districts. Eastern Mediterranean Health Journal, 12(3-4), 310-315.
R2: Abdulla, F., Abu Qdais, H., & Rabi, A. (2008). Site investigation on medical waste management practices in northern Jordan. Waste Management, 28(2), 450-458.
R3: Agha, S., Karim, A. M., Balal, A., & Sosler, S. (2007). The impact of a reproductive health franchise on client satisfaction in rural Nepal. Health Policy & Planning, 22(5), 320-328.
R4: Aguilera, N., & Marrufo, G. M. (2007).
Can better infrastructure and quality reduce hospital infant mortality rates in Mexico? Health Policy, 80(2), 239-252.
R5: Al Tehewy, M., Salem, B., Habil, I., & El Okda, S. (2009). Evaluation of accreditation program in non-governmental organizations' health units in Egypt: Short-term outcomes. International Journal for Quality in Health Care, 21(3), 183-189.
R6: Allen, C. W., & Jeffery, H. (2006). Implementation and evaluation of a neonatal educational program in rural Nepal. Journal of Tropical Pediatrics, 52(3), 218-222.
R7: Al-Shahwani, M. F. (2005). Bacterial distribution analysis of the atmosphere of two hospitals in Ibb, Yemen. Eastern Mediterranean Health Journal, 11(5-6), 1115-1119.
R8: Amanullah, A. S., & Uddin, J. (2008). Dynamics of health behavior regarding hospital waste management in Dhaka, Bangladesh: A dysfunctional health belief model. International Quarterly of Community Health Education, 29(4), 363-380.
R9: Anacleto, T. A., Perini, E., Rosa, M. B., & Cesar, C. C. (2007). Drug-dispensing errors in the hospital pharmacy. Clinics (Sao Paulo, Brazil), 62(3), 243-250.
R10: Apisarnthanarak, A., Danchaivijitr, S., Bailey, T. C., & Fraser, V. J. (2006). Inappropriate antibiotic use in a tertiary care center in Thailand: An incidence study and review of experience in Thailand. Infection Control & Hospital Epidemiology, 27(4), 416-420.
R11: Arocena, P., & Garcia-Prado, A. (2007). Accounting for quality in the measurement of hospital performance: Evidence from Costa Rica. Health Economics, 16(7), 667-685.
R12: Arulmani, R., Rajendran, S. D., & Suresh, B. (2008). Adverse drug reaction monitoring in a secondary care hospital in south India. British Journal of Clinical Pharmacology, 65(2), 210-216.
R13: Bajunirwe, F., Arts, E. J., Tisch, D. J., King, C. H., Debanne, S. M., & Sethi, A. K. (2009).
Adherence and treatment response among HIV-1-infected adults receiving antiretroviral therapy in a rural government hospital in southwestern Uganda. Journal of the International Association of Physicians in AIDS Care: JIAPAC, 8(2), 139-147.
R14: Bakar, C., Akgun, H. S., & Al Assaf, A. F. (2008). The role of expectations in patients' hospital assessments: A Turkish university hospital example. International Journal of Health Care Quality Assurance, 21(5), 503-516.
R15: Baltussen, R., & Ye, Y. (2006). Quality of care of modern health services as perceived by users and non-users in Burkina Faso. International Journal for Quality in Health Care, 18(1), 30-34.
R16: Bateganya, M., Hagopian, A., Tavrow, P., Luboga, S., & Barnhart, S. (2009). Incentives and barriers to implementing national hospital standards in Uganda. International Journal for Quality in Health Care, 21(6), 421-426.
R17: Bazant, E. S., Koenig, M. A., Fotso, J. C., & Mills, S. (2009). Women's use of private and government health facilities for childbirth in Nairobi's informal settlements. Studies in Family Planning, 40(1), 39-50.
R18: Beghdadli, B., Ghomari, O., Taleb, M., & Fanello, S. (2010). Implementation of WHO healthcare waste management (HCWM) approach in an Algerian hospital. Waste Management, 30(1), 162-163.
R19: Benkirane, R. R., R-Abouqal, R., Haimeur, C. C., S Ech Cherif El Kettani, S. S., Azzouzi, A. A., M'daghri Alaoui, A. A., Thimou, A. A., Nejmi, M. M., Maazouzi, W. W., Madani, N. N., R-Edwards, I., & Soulaymani, R. R. (2009). Incidence of adverse drug events and medication errors in intensive care units: A prospective multicenter study. Journal of Patient Safety, 5(1), 16-22.
R20: Berglund, A., Lefevre-Cholay, H., Bacci, A., Blyumina, A., & Lindmark, G. (2010). Successful implementation of evidence-based routines in Ukrainian maternities. Acta Obstetricia Et Gynecologica Scandinavica, 89(2), 230-237.
R21: Berhanu, S., Alemu, S., Prevett, M., & Parry, E. H. (2009).
Primary care treatment of epilepsy in rural Ethiopia: Causes of default from follow-up. Seizure, 18(2), 100-103.
R22: Blaya, J. A., Shin, S. S., Yagui, M. J., Yale, G., Suarez, C. Z., Asencios, L. L., Cegielski, J. P., & Fraser, H. S. (2007). A web-based laboratory information system to improve quality of care of tuberculosis patients in Peru: Functional requirements, implementation and usage statistics. BMC Medical Informatics & Decision Making, 7, 33.
R23: Bohomol, E., Ramos, L. H., & D'Innocenzo, M. (2009). Medication errors in an intensive care unit. Journal of Advanced Nursing, 65(6), 1259-1267.
R24: Bossyns, P., Abache, R., Abdoulaye, M. S., Miye, H., Depoorter, A. M., & Van Lerberghe, W. (2006). Monitoring the referral system through benchmarking in rural Niger: An evaluation of the functional relation between health centres and the district hospital. BMC Health Services Research, 6, 51.
R25: Bradley, E., Hartwig, K. A., Rowe, L. A., Cherlin, E. J., Pashman, J., Wong, R., Dentry, T., Wood, W. E., & Abebe, Y. (2008). Hospital quality improvement in Ethiopia: A partnership-mentoring model. International Journal for Quality in Health Care, 20(6), 392-399.
R26: Brazier, E., Andrzejewski, C., Perkins, M. E., Themmen, E. M., Knight, R. J., & Bassane, B. (2009). Improving poor women's access to maternity care: Findings from a primary care intervention in Burkina Faso. Social Science & Medicine, 69(5), 682-690.
R27: Bruijns, S. R., Wallis, L. A., & Burch, V. C. (2008). Effect of introduction of nurse triage on waiting times in a South African emergency department. Emergency Medicine Journal, 25(7), 395-397.
R28: Buabeng, K. O., Duwiejua, M., Matowe, L. K., Smith, F., & Enlund, H. (2008). Availability and choice of antimalarials at medicine outlets in Ghana: The question of access to effective medicines for malaria control. Clinical Pharmacology & Therapeutics, 84(5), 613-619.
R29: Butt, Z. A., Gilani, A. H., Nanan, D., Sheikh, A. L., & White, F. (2005). Quality of pharmacies in Pakistan: A cross-sectional survey. International Journal for Quality in Health Care, 17(4), 307-313.
R30: Cesar, J. A., Matijasevich, A., Santos, I. S., Barros, A. J., Dias-da-Costa, J. S., Barros, F. C., & Victora, C. G. (2008). The use of maternal and child health services in three population-based cohorts in southern Brazil, 1982-2004. Cadernos De Saude Publica, 24(Suppl 3), S427-36.
R31: Chatterjee, S., Mandal, A., Lyle, N., Mukherjee, S., & Singh, A. K. (2007). Drug utilization study in a neonatology unit of a tertiary care hospital in eastern India. Pharmacoepidemiology & Drug Safety, 16(10), 1141-1145.
R32: Cho, S. H., Hwang, J. H., & Kim, J. (2008). Nurse staffing and patient mortality in intensive care units. Nursing Research, 57(5), 322-330.
R33: Chowdhury, S., Hossain, S. A., & Halim, A. (2009). Assessment of quality of care in maternal and newborn health services available in public health care facilities in Bangladesh. Bangladesh Medical Research Council Bulletin, 35(2), 53-56.
R34: Chukwuani, C. M., Olugboji, A., Akuto, E. E., Odebunmi, A., Ezeilo, E., & Ugbene, E. (2006). A baseline survey of the primary healthcare system in south eastern Nigeria. Health Policy, 77(2), 182-201.
R35: Chung, S., Kim, S., Kim, J., & Sohn, K. (2010). Use of multiattribute utility theory for formulary management in a health system. American Journal of Health-System Pharmacy, 67(2), 128-135.
R36: Costa, L. A., Valli, C., & Alvarenga, A. P. (2008). Medication dispensing errors at a public pediatric hospital. Revista Latino-Americana De Enfermagem, 16(5), 812-817.
R37: Cubrilo-Turek, M., Urek, R., & Turek, S. (2006). Burnout syndrome--assessment of a stressful job among intensive care staff. Collegium Antropologicum, 30(1), 131-135.
R38: Da Silva, C. E., Hoppe, A. E., Ravanello, M. M., & Mello, N. (2005).
Medical wastes management in the south of Brazil. Waste Management, 25(6), 600-605.
R39: Dalal, K., & Dawad, S. (2009). Non-utilization of public health care facilities: Examining the reasons through a national study of women in India. Rural & Remote Health, 9(3), 1178.
R40: Dandona, L., Sisodia, P., Prasad, T. L., Marseille, E., Chalapathi Rao, M., Kumar, A. A., Kumar, S. G., Ramesh, Y. K., Over, M., Someshwar, M., & Kahn, J. G. (2005). Cost and efficiency of public sector sexually transmitted infection clinics in Andhra Pradesh, India. BMC Health Services Research, 5, 69.
R41: Daviaud, E., & Chopra, M. (2008). How much is not enough? Human resources requirements for primary health care: A case study from South Africa. Bulletin of the World Health Organization, 86(1), 46-51.
R42: de Paiva, S. M., & Gomes, E. L. (2007). Hospital care: Assessment of users' satisfaction during hospital stay. Revista Latino-Americana De Enfermagem, 15(5), 973-979.
R43: de Villiers, L., Kalula, S. Z., & Burch, V. C. (2009). Does multidisciplinary stroke care improve outcome in a secondary-level hospital in South Africa? International Journal of Stroke, 4(2), 89-93.
R44: Dizdar, O., Karadag, O., Kalyoncu, U., Kurt, M., Ulger, Z., Sardan, Y. C., & Unal, S. (2007). Appropriate utilization of hospital beds in internal medicine: Evaluation in a tertiary care hospital. Journal of Evaluation in Clinical Practice, 13(3), 408-411.
R45: Drabo, K. M., Dauby, C., Coste, T., Dembele, M., Hien, C., Ouedraogo, A., Macq, J., Ouedraogo, J. B., & Dujardin, B. (2006). Decentralising tuberculosis case management in two districts of Burkina Faso. International Journal of Tuberculosis & Lung Disease, 10(1), 93-98.
R46: Duerink, D. O., Roeshadi, D., Wahjono, H., Lestari, E. S., Hadi, U., Wille, J. C., De Jong, R. M., Nagelkerke, N. J., Van den Broek, P. J., & Study Group 'Antimicrobial Resistance In Indonesia Prevalence and Prevention', Amrin. (2006). Surveillance of healthcare-associated infections in Indonesian hospitals. Journal of Hospital Infection, 62(2), 219-229.
R47: Duke, T., Keshishiyan, E., Kuttumuratova, A., Ostergren, M., Ryumina, I., Stasii, E., Weber, M. W., & Tamburlini, G. (2006). Quality of hospital care for children in Kazakhstan, Republic of Moldova, and Russia: Systematic observational assessment. Lancet, 367(9514), 919-925.
R48: El Derea, H., Salem, E., Fawzi, M., & Abdel Azeem, M. (2008). Safety of patient meals in 2 hospitals in Alexandria, Egypt before and after training of food handlers. Eastern Mediterranean Health Journal, 14(4), 941-952.
R49: El-Jardali, F., Jamal, D., Dimassi, H., Ammar, W., & Tchaghchaghian, V. (2008). The impact of hospital accreditation on quality of care: Perception of Lebanese nurses. International Journal for Quality in Health Care, 20(5), 363-371.
R50: Emam, S. A., Nabawy, Z. M., Mohamed, A. H., & Sbeira, W. H. (2005). Assessment of nurses' work climate at Alexandria Main University Hospital. Journal of the Egyptian Public Health Association, 80(1-2), 233-262.
R51: Erdinc, F. S., Yetkin, M. A., Ataman Hatipoglu, C., Yucel, M., Karakoc, A. E., Cevik, M. A., & Tulek, N. (2006). Five-year surveillance of nosocomial infections in Ankara Training and Research Hospital. Journal of Hospital Infection, 64(4), 391-396.
R52: Erdur, B., Ergin, A., Turkcuer, I., Parlak, I., Ergin, N., & Boz, B. (2006). A study of depression and anxiety among doctors working in emergency units in Denizli, Turkey. Emergency Medicine Journal, 23(10), 759-763.
R53: Eryilmaz, G. (2006). The evaluation of family planning services given in Erzurum mother-child health and family planning center in eastern Turkey. European Journal of Contraception & Reproductive Health Care, 11(2), 146-150.
R54: Faisel, H., Pittrof, R., El-Hosini, M., Habib, M., & Azzam, E. (2009).
Using standard primipara method to compare the quality of maternity care in Cairo and London. Journal of Obstetrics & Gynaecology, 29(4), 284-287.
R55: Farzandipour, M., & Sheikhtaheri, A. (2009). Evaluation of factors influencing accuracy of principal procedure coding based on ICD-9-CM: An Iranian study. Perspectives in Health Information Management, 6, 5.
R56: Ferrer, L. M., Cianelli, R., Norr, K. F., Cabieses, B., Araya, A., Irarrazabal, L., & Bernales, M. (2009). Observed use of standard precautions in Chilean community clinics. Public Health Nursing, 26(5), 440-448.
R57: Flessa, S. (2005). Hospital development plans: A new tool to break ground for strategic thinking in Tanzanian hospitals. European Journal of Health Economics, 6(4), 322-326.
R58: Flores Pena, Y., R de la Gala, S. E., & Cerda-Flores, R. M. (2009). Maternal satisfaction with maternal-infant nursing care in Campeche, Mexico. Revista Latino-Americana De Enfermagem, 17(5), 645-650.
R59: Fung, I. C., Guinness, L., Vickerman, P., Watts, C., Vannela, G., Vadhvana, J., Foss, A. M., Malodia, L., Gandhi, M., & Jani, G. (2007). Modelling the impact and cost-effectiveness of the HIV intervention programme amongst commercial sex workers in Ahmedabad, Gujarat, India. BMC Public Health, 7, 195.
R60: Furtado, G. H., Santana, S. L., Coutinho, A. P., Perdiz, L. B., Wey, S. B., & Medeiros, E. A. (2006). Compliance with handwashing at two intensive care units in Sao Paulo. Brazilian Journal of Infectious Diseases, 10(1), 33-35.
R61: Gallesio, A. O., Ceraso, D., & Palizas, F. (2006). Improving quality in the intensive care unit setting. Critical Care Clinics, 22(3), 547-571.
R62: Ghimire, S., Nepal, S., Bhandari, S., Nepal, P., & Palaian, S. (2009). A prospective surveillance of drug prescribing and dispensing in a teaching hospital in western Nepal. JPMA - Journal of the Pakistan Medical Association, 59(10), 726-731.
R63: Gomez Ponce de Leon, R., Billings, D. L., & Barrionuevo, K. (2006). Woman-centered post-abortion care in public hospitals in Tucuman, Argentina: Assessing quality of care and its link to human rights. Health & Human Rights, 9(1), 174-201.
R64: Gosselin, R. A., & Heitto, M. (2008). Cost-effectiveness of a district trauma hospital in Battambang, Cambodia. World Journal of Surgery, 32(11), 2450-2453.
R65: Gosselin, R. A., Thind, A., & Bellardinelli, A. (2006). Cost/DALY averted in a small hospital in Sierra Leone: What is the relative contribution of different services? World Journal of Surgery, 30(4), 505-511.
R66: Gouws, E., Bryce, J., Pariyo, G., Armstrong Schellenberg, J., Amaral, J., & Habicht, J. P. (2005). Measuring the quality of child health care at first-level facilities. Social Science & Medicine, 61(3), 613-625.
R67: Groene, O., Klazinga, N., Kazandjian, V., Lombrail, P., & Bartels, P. (2008). The World Health Organization performance assessment tool for quality improvement in hospitals (PATH): An analysis of the pilot implementation in 37 hospitals. International Journal for Quality in Health Care, 20(3), 155-161.
R68: Gulalp, B., Karcioglu, O., Koseoglu, Z., & Sari, A. (2009). Dangers faced by emergency staff: Experience in urban centers in southern Turkey. Ulusal Travma Ve Acil Cerrahi Dergisi = Turkish Journal of Trauma & Emergency Surgery: TJTES, 15(3), 239-242.
R69: Hadi, U., Duerink, D. O., Lestari, E. S., Nagelkerke, N. J., Keuter, M., Huis In't Veld, D., Suwandojo, E., Rahardjo, E., van den Broek, P., & Gyssens, I. C. (2008). Antimicrobial resistance in Indonesia: Prevalence and prevention. Clinical Microbiology & Infection, 14(7), 698-707.
R70: Hajialiafzali, H., Moss, J. R., & Mahmood, M. A. (2007). Efficiency measurement for hospitals owned by the Iranian social security organisation. Journal of Medical Systems, 31(3), 166-172.
R71 Hariharan, S., Chen, D., Merritt-Charles, L., Bobb, N., De Freitas, L., Esdelle-Thomas, A., Mohamed, J., Charles, D., Colley, K., & Renaud, E. (2007). An evaluation of the intensive care unit resources and utilization in Trinidad. West Indian Medical Journal, 56(2), 144-151.
R72 Hariharan, S., Dey, P. K., Chen, D. R., Moseley, H. S., & Kumar, A. Y. (2005). Application of analytic hierarchy process for measuring and comparing the global performance of intensive care units. Journal of Critical Care, 20(2), 117-124.
R73 Harling, G., Orrell, C., & Wood, R. (2007). Healthcare utilization of patients accessing an African national treatment program. BMC Health Services Research, 7, 80.
R74 Hartwig, K., Pashman, J., Cherlin, E., Dale, M., Callaway, M., Czaplinski, C., Wood, W. E., Abebe, Y., Dentry, T., & Bradley, E. H. (2008). Hospital management in the context of health sector reform: A planning model in Ethiopia. International Journal of Health Planning & Management, 23(3), 203-218.
R75 Hasna, F. S. (2006). Utilization of family planning services in the governorate of Zarqa, Jordan. Journal of Transcultural Nursing, 17(4), 365-374.
R76 Hollisaaz, M. T., Noorbala, M. H., Irani, N., Bahaeloo-Horeh, S., Assari, S., Saadat, S. H., Araghizadeh, H., & Rezaie, Y. (2007). Severity of chronic pain affects health care utilization after kidney transplantation. Transplantation Proceedings, 39(4), 1122-1125.
R77 Hong, H. S., Kim, I. K., Lee, S. H., & Kim, H. S. (2009). Adoption of a PDA-based home hospice care system for cancer patients. CIN: Computers, Informatics, Nursing, 27(6), 365-371.
R78 Hongoro, C., McPake, B., & Vickerman, P. (2005). Measuring the quality of hospital tuberculosis services: A prospective study in four Zimbabwe hospitals. International Journal for Quality in Health Care, 17(4), 287-292.
Hoppenbrouwer, J., & Kanyengo, C. W. (2007).
R79 Current access to health information in Zambia: A survey of selected health institutions. Health Information & Libraries Journal, 24(4), 246-256.
R80 Hwang, J. I., & Chang, H. (2009). Work climate perception and turnover intention among Korean hospital staff. International Nursing Review, 56(1), 73-80.
R81 Imam, S. Z., Syed, K. S., Ali, S. A., Ali, S. U., Fatima, K., Gill, M., Hassan, M. O., Hashmi, S. H., Siddiqi, M. T., Khan, H. M., & Jameel, O. F. (2007). Patients' satisfaction and opinions of their experiences during admission in a tertiary care hospital in Pakistan - a cross sectional study. BMC Health Services Research, 7, 161.
R82 Ishengoma, D. R., Rwegoshora, R. T., Mdira, K. Y., Kamugisha, M. L., Anga, E. O., Bygbjerg, I. C., Ronn, A. M., & Magesa, S. M. (2009). Health laboratories in the Tanga region of Tanzania: The quality of diagnostic services for malaria and other communicable diseases. Annals of Tropical Medicine & Parasitology, 103(5), 441-453.
R83 Iyengar, K., & Iyengar, S. D. (2009). Emergency obstetric care and referral: Experience of two midwife-led health centres in rural Rajasthan, India. Reproductive Health Matters, 17(33), 9-20.
R84 Jain, S., Basu, S., & Parmar, V. R. (2009). Medication errors in neonates admitted in intensive care unit and emergency department. Indian Journal of Medical Sciences, 63(4), 145-151.
R85 Jeong, I., Cho, J., & Park, S. (2008). Compliance with standard precautions among operating room nurses in South Korea. American Journal of Infection Control, 36(10), 739-742.
R86 Jeong, S. H., Lee, T., Kim, I. S., Lee, M. H., & Kim, M. J. (2007). The effect of nurses' use of the principles of learning organization on organizational effectiveness. Journal of Advanced Nursing, 58(1), 53-62.
R87 Johns, B., Probandari, A., Mahendradhata, Y., & Ahmad, R. A. (2009). An analysis of the costs and treatment success of collaborative arrangements among public and private providers for tuberculosis control in Indonesia.
Health Policy, 93(2-3), 214-224.
R88 Jung, M., Lee, K. H., & Choi, M. (2009). Perceived service quality among outpatients visiting hospitals and clinics and their willingness to re-utilize the same medical institutions. Journal of Preventive Medicine & Public Health / Yebang Uihakhoe Chi, 42(3), 151-159.
R89 Kang, H. Y., Kim, S. J., Cho, W., & Lee, S. (2009). Consumer use of publicly released hospital performance information: Assessment of the national hospital evaluation program in Korea. Health Policy, 89(2), 174-183.
R90 Karolinski, A., Micone, P., Mercer, R., Gibbons, L., Althabe, F., Belizan, J. M., Messina, A., Lapidus, A., Correa, A., Taddeo, C., Lambruschini, R., Bertin, M., Dibiase, L., Montes Varela, D., Laterra, C., & AMBA Perinatal Network Research Group. (2009). Evidence-based maternal and perinatal healthcare practices in public hospitals in Argentina. International Journal of Gynaecology & Obstetrics, 105(2), 118-122.
R91 Karunamoorthi, K., Rajalakshmi, M., Babu, S. M., & Yohannes, A. (2009). HIV/AIDS patient's satisfactory and their expectations with pharmacy service at specialist antiretroviral therapy (ART) units. European Review for Medical & Pharmacological Sciences, 13(5), 331-339.
R92 Kaur, S., Sharma, R., Talwar, R., Verma, A., & Singh, S. (2009). A study of job satisfaction and work environment perception among doctors in a tertiary hospital in Delhi. Indian Journal of Medical Sciences, 63(4), 139-144.
R93 Kijsanayotin, B., Pannarunothai, S., & Speedie, S. M. (2009). Factors influencing health information technology adoption in Thailand's community health centers: Applying the UTAUT model. International Journal of Medical Informatics, 78(6), 404-416.
R94 Kikwilu, E. N., Frencken, J. E., & Mulder, J. (2009). Barriers to the adoption of the ART approach as perceived by dental practitioners in governmental dental clinics, in Tanzania.
Journal of Applied Oral Science, 17(5), 408-413.
R95 Kim, C. W., Lee, S. Y., & Hong, S. C. (2005). Equity in utilization of cancer inpatient services by income classes. Health Policy, 72(2), 187-200.
R96 Kim, J. M., Koh, K. W., Yu, B. C., Jeon, M. J., Kim, Y. J., & Kim, Y. H. (2009). Assessment of community capacity building ability of health promotion workers in public health centers. Journal of Preventive Medicine & Public Health / Yebang Uihakhoe Chi, 42(5), 283-292.
R97 Kirigia, J. M., Emrouznejad, A., Cassoma, B., Asbu, E. Z., & Barry, S. (2008). A performance assessment method for hospitals: The case of municipal hospitals in Angola. Journal of Medical Systems, 32(6), 509-519.
R98 Kongnyuy, E. J., Mlava, G., & van den Broek, N. (2009). Criteria-based audit to improve women-friendly care in maternity units in Malawi. Journal of Obstetrics & Gynaecology Research, 35(3), 483-489.
R99 Kotagal, M., Lee, P., Habiyakare, C., Dusabe, R., Kanama, P., Epino, H. M., Rich, M. L., & Farmer, P. E. (2009). Improving quality in resource poor settings: Observational study from rural Rwanda. BMJ, 339, b3488.
R100 Kruk, M. E., Paczkowski, M., Mbaruku, G., de Pinho, H., & Galea, S. (2009). Women's preferences for place of delivery in rural Tanzania: A population-based discrete choice experiment. American Journal of Public Health, 99(9), 1666-1672.
R101 Lee, K. S., Chun, K. H., & Lee, J. S. (2008). Reforming the hospital service structure to improve efficiency: Urban hospital specialization. Health Policy, 87(1), 41-49.
R102 Lei, Z., Qiongjing, Y., Qiuli, W., Sabrina, K., Xiaojing, L., & Changli, W. (2009). Sleep quality and sleep disturbing factors of inpatients in a Chinese general hospital. Journal of Clinical Nursing, 18(17), 2521-2529.
R103 Lerner, R. B., Carvalho, M., Vieira, A. A., Lopes, J. M., & Moreira, M. E. (2008). Medication errors in a neonatal intensive care unit.
Jornal De Pediatria, 84(2), 166-170.
R104 Levin, M. E. (2006). Language as a barrier to care for Xhosa-speaking patients at a South African paediatric teaching hospital. South African Medical Journal (Suid-Afrikaanse Tydskrif Vir Geneeskunde), 96(10), 1076-1079.
R105 Lufesi, N. N., Andrew, M., & Aursnes, I. (2007). Deficient supplies of drugs for life threatening diseases in an African community. BMC Health Services Research, 7, 86.
R106 MacLeod, J. B., Gravelin, S., Jones, T., Gololov, A., Thomas, M., Omondi, B., & Bukusi, E. (2009). Assessment of acute trauma care training in Kenya. American Surgeon, 75(11), 1118-1123.
R107 Mashauri, F. M., Siza, J. E., Temu, M. M., Mngara, J. T., Kishamawe, C., & Changalucha, J. M. (2007). Assessment of quality assurance in HIV testing in health facilities in Lake Victoria zone, Tanzania. Tanzania Health Research Bulletin, 9(2), 110-114.
R108 Masiye, F. (2007). Investigating health system performance: An application of data envelopment analysis to Zambian hospitals. BMC Health Services Research, 7, 58.
R109 Masiye, F., Kirigia, J. M., Emrouznejad, A., Sambo, L. G., Mounkaila, A., Chimfwembe, D., & Okello, D. (2006). Efficient management of health centres human resources in Zambia. Journal of Medical Systems, 30(6), 473-481.
R110 Mathews, C., Guttmacher, S. J., Flisher, A. J., Mtshizana, Y. Y., Nelson, T., McCarthy, J., & Daries, V. (2009). The quality of HIV testing services for adolescents in Cape Town, South Africa: Do adolescent-friendly services make a difference? Journal of Adolescent Health, 44(2), 188-190.
R111 Mbembati, N. A., Mwangu, M., Muhondwa, E. P., & Leshabari, M. M. (2008). Performance indicators for quality in surgical and laboratory services at Muhimbili National Hospital (MNH) in Tanzania. East African Journal of Public Health, 5(1), 13-16.
Mbonye, A. K., Mutabazi, M. G., Asimwe, J.
R112 B., Sentumbwe, O., Kabarangira, J., Nanda, G., & Orinda, V. (2007). Declining maternal mortality ratio in Uganda: Priority interventions to achieve the millennium development goal. International Journal of Gynaecology & Obstetrics, 98(3), 285-290.
R113 McCarthy, K. D., Metchock, B., Kanphukiew, A., Monkongdee, P., Sinthuwattanawibool, C., Tasaneeyapan, T., Rienthong, S., Ngamlert, K., Srisuwanvilai, L. O., & Varma, J. K. (2008). Monitoring the performance of mycobacteriology laboratories: A proposal for standardized indicators. International Journal of Tuberculosis & Lung Disease, 12(9), 1015-1020.
R114 Mehrabi, F., Nasiripour, A., & Delgoshaei, B. (2008). Customer focus level following implementation of quality improvement model in Tehran social security hospitals. International Journal of Health Care Quality Assurance, 21(6), 562-568.
R115 Mepham, S. O., Squire, S. B., Chisuwo, L., Kandulu, J., & Bates, I. (2009). Utilisation of laboratory services by health workers in a district hospital in Malawi. Journal of Clinical Pathology, 62(10), 935-938.
R116 Meuwissen, L. E., Gorter, A. C., Kester, A. D., & Knottnerus, J. A. (2006). Does a competitive voucher program for adolescents improve the quality of reproductive health care? A simulated patient study in Nicaragua. BMC Public Health, 6, 204.
R117 Mfinanga, S. G., Kahwa, A., Kimaro, G., Kilale, A., Kivuyo, S., Senkoro, M., Ngowi, B., Mtandu, R., Mutayoba, B., Ngadaya, E., & Mashoto, K. (2008). Patient's dissatisfaction with the public and private laboratory services in conducting HIV related testing in Tanzania. BMC Health Services Research, 8, 167.
R118 Miasso, A. I., Oliveira, R. C., Silva, A. E., Lyra Junior, D. P., Gimenes, F. R., Fakih, F. T., & Cassiani, S. H. (2009). Prescription errors in Brazilian hospitals: A multi-centre exploratory survey. Cadernos De Saude Publica, 25(2), 313-320.
Mohamed Soliman, S., & Ibrahim Ahmed, A. (2007).
R119 Overview of biomedical waste management in selected governorates in Egypt: A pilot study. Waste Management, 27(12), 1920-1923.
R120 Mohammadi, S. M., Mohammadi, S. F., Hedges, J. R., Zohrabi, M., & Ameli, O. (2007). Introduction of a quality improvement program in a children's hospital in Tehran: Design, implementation, evaluation and lessons learned. International Journal for Quality in Health Care, 19(4), 237-243.
R121 Motta, M. C., Villa, T. C., Golub, J., Kritski, A. L., Ruffino-Netto, A., Silva, D. F., Harter, R. G., & Scatena, L. M. (2009). Access to tuberculosis diagnosis in Itaborai City, Rio de Janeiro, Brazil: The patient's point of view. International Journal of Tuberculosis & Lung Disease, 13(9), 1137-1141.
R122 Mubyazi, G., Massaga, J., Kamugisha, M., Mubyazi, J. N., Magogo, G. C., Mdira, K. Y., Gesase, S., & Sukwa, T. (2006). User charges in public health facilities in Tanzania: Effect on revenues, quality of services and people's health-seeking behaviour for malaria illnesses in Korogwe district. Health Services Management Research, 19(1), 23-35.
R123 Muhondwa, E. P., Leshabari, M. T., Mwangu, M., Mbembati, N., & Ezekiel, M. J. (2008). Patient satisfaction at the Muhimbili National Hospital in Dar es Salaam, Tanzania. East African Journal of Public Health, 5(2), 67-73.
R124 Mwangi, R., Chandler, C., Nasuwa, F., Mbakilwa, H., Poulsen, A., Bygbjerg, I. C., & Reyburn, H. (2008). Perceptions of mothers and hospital staff of paediatric care in 13 public hospitals in northern Tanzania. Transactions of the Royal Society of Tropical Medicine & Hygiene, 102(8), 805-810.
R125 Nabyonga-Orem, J., Karamagi, H., Atuyambe, L., Bagenda, F., Okuonzi, S. A., & Walker, O. (2008). Maintaining quality of health services after abolition of user fees: A Uganda case study. BMC Health Services Research, 8, 102.
Newlands, D., Yugbare-Belemsaga, D., Ternent, L., Hounton, S., & Chapman, G.
R126 (2008). Assessing the costs and cost-effectiveness of a skilled care initiative in rural Burkina Faso. Tropical Medicine & International Health, 13(Suppl 1), 61-67.
R127 Nguyen, M. H., Gammeltoft, T., & Rasch, V. (2007). Situation analysis of quality of abortion care in the main maternity hospital in Hai Phong, Viet Nam. Reproductive Health Matters, 15(29), 172-182.
R128 Njogu, J., Akhwale, W., Hamer, D. H., & Zurovac, D. (2008). Health facility and health worker readiness to deliver new national treatment policy for malaria in Kenya. East African Medical Journal, 85(5), 213-221.
R129 Oh, H. S., Cheong, H. W., Yi, S. E., Kim, H., Choe, K. W., & Cho, S. I. (2007). Development and application of evaluation indices for hospital infection surveillance and control programs in the Republic of Korea. Infection Control & Hospital Epidemiology, 28(4), 435-445.
R130 Oliveras, E., Larsen, U., & David, P. H. (2005). Client satisfaction with abortion care in three Russian cities. Journal of Biosocial Science, 37(5), 585-601.
R131 Opondo, C., Ntoburi, S., Wagai, J., Wafula, J., Wasunna, A., Were, F., Wamae, A., Migiro, S., Irimu, G., & English, M. (2009). Are hospitals prepared to support newborn survival? An evaluation of eight first-referral level hospitals in Kenya. Tropical Medicine & International Health, 14(10), 1165-1172.
R132 Orji, E. O., Ojofeitimi, E. O., Esimai, A. O., Adejuyigbe, E., Adeyemi, A. B., & Owolabi, O. O. (2006). Assessment of delays in receiving delivery care at a tertiary healthcare delivery centre in Nigeria. Journal of Obstetrics & Gynaecology, 26(7), 643-644.
R133 Osman, H., Campbell, O. M., & Nassar, A. H. (2009). Using emergency obstetric drills in maternity units as a performance improvement tool. Birth, 36(1), 43-50.
Otis, K. E., & Brett, J. A. (2008). Barriers to hospital births: Why do many Bolivian women give birth at home?
R134 Pan American Journal of Public Health, 24(1), 46-53.
R135 Pan, X., Dib, H. H., Wang, X., & Zhang, H. (2006). Service utilization in community health centers in China: A comparison analysis with local hospitals. BMC Health Services Research, 6, 93.
R136 Panicker, G. K., Karnad, D. R., Joshi, R., Shetty, S., Vyas, N., Kothari, S., & Narula, D. (2009). Z-score for benchmarking reader competence in a central ECG laboratory. Annals of Noninvasive Electrocardiology, 14(1), 19-25.
R137 Passarelli, M. C., Jacob-Filho, W., & Figueras, A. (2005). Adverse drug reactions in an elderly hospitalised population: Inappropriate prescription is a leading cause. Drugs & Aging, 22(9), 767-777.
R138 Peabody, J. W., Florentino, J., Shimkhada, R., Solon, O., & Quimbo, S. (2010). Quality variation and its impact on costs and satisfaction: Evidence from the QIDS study. Medical Care, 48(1), 25-30.
R139 Pillay, R. (2008). The skills gap in hospital management in the South African public health sector. Journal of Public Health Management & Practice, 14(5), E8-14.
R140 Pilyavsky, A., & Staat, M. (2006). Health care in the CIS countries: The case of hospitals in Ukraine. European Journal of Health Economics, 7(3), 189-195.
R141 Pinidiyapathirage, M. J., & Wickremasinghe, A. R. (2007). Antenatal care provided and its quality in field clinics in Gampaha district, Sri Lanka. Asia-Pacific Journal of Public Health, 19(3), 38-44.
R142 Pitchforth, E., van Teijlingen, E., Graham, W., Dixon-Woods, M., & Chowdhury, M. (2006). Getting women to hospital is not enough: A qualitative study of access to emergency obstetric care in Bangladesh. Quality & Safety in Health Care, 15(3), 214-219.
R143 Pongsupap, Y., & Van Lerberghe, W. (2006). Choosing between public and private or between hospital and primary care: Responsiveness, patient-centredness and prescribing patterns in outpatient consultations in Bangkok.
Tropical Medicine & International Health, 11(1), 81-89.
R144 Pourasghar, F., Malekafzali, H., Kazemi, A., Ellenius, J., & Fors, U. (2008). What they fill in today, may not be useful tomorrow: Lessons learned from studying medical records at the women hospital in Tabriz, Iran. BMC Public Health, 8, 139.
R145 Ramani, K. V. (2006). Managing hospital supplies: Process reengineering at Gujarat Cancer Research Institute, India. Journal of Health Organization & Management, 20(2-3), 218-226.
R146 Rattanachotphanit, T., Limwattananon, C., Limwattananon, S., Johns, J. R., Schommer, J. C., & Brown, L. M. (2008). Assessing the efficiency of hospital pharmacy services in Thai public district hospitals. Southeast Asian Journal of Tropical Medicine & Public Health, 39(4), 753-765.
R147 Renaudin, P., Prual, A., Vangeenderhuysen, C., Ould Abdelkader, M., Ould Mohamed Vall, M., & Ould El Joud, D. (2007). Ensuring financial access to emergency obstetric care: Three years of experience with obstetric risk insurance in Nouakchott, Mauritania. International Journal of Gynaecology & Obstetrics, 99(2), 183-190.
R148 Renner, A., Kirigia, J. M., Zere, E. A., Barry, S. P., Kirigia, D. G., Kamara, C., & Muthuri, L. H. (2005). Technical efficiency of peripheral health units in Pujehun district of Sierra Leone: A DEA application. BMC Health Services Research, 5, 77.
R149 Richard, F., Ouedraogo, C., Zongo, V., Ouattara, F., Zongo, S., Gruenais, M. E., & De Brouwere, V. (2009). The difficulty of questioning clinical practice: Experience of facility-based case reviews in Ouagadougou, Burkina Faso. BJOG: An International Journal of Obstetrics & Gynaecology, 116(1), 38-44.
R150 Rowe, A. K., de Leon, G. F., Mihigo, J., Santelli, A. C., Miller, N. P., & Van-Dunem, P. (2009). Quality of malaria case management at outpatient health facilities in Angola. Malaria Journal, 8(1), 275.
Roy, C., Das, J. K., Jha, H. K., Bhattacharya, V., Shivdasani, J. P., & Nandan, D. (2009).
R151 Logistics and supply management system of drugs at different levels in Darbhanga district of Bihar. Indian Journal of Public Health, 53(3), 147-150.
R152 Rutebemberwa, E., Ekirapa-Kiracho, E., Okui, O., Walker, D., Mutebi, A., & Pariyo, G. (2009). Lack of effective communication between communities and hospitals in Uganda: A qualitative exploration of missing links. BMC Health Services Research, 9, 146.
R153 Sahin, B., Yilmaz, F., & Lee, K. H. (2007). Factors affecting inpatient satisfaction: Structural equation modeling. Journal of Medical Systems, 31(1), 9-16.
R154 Sallam, S. A., Khallafallah, N. M., Ibrahim, N. K., & Okasha, A. O. (2009). Pharmacoepidemiological study of self-medication in adults attending pharmacies in Alexandria, Egypt. Eastern Mediterranean Health Journal, 15(3), 683-691.
R155 Sangthong, K., Soparat, P., Moongtui, W., & Danchaivijitr, S. (2005). Development of quality indicators for sterilization practices of the central sterile supply department. Journal of the Medical Association of Thailand, 88(Suppl 10), S128-32.
R156 Sarkar, A. P., Biswas, R., & Tripathi, S. K. (2007). A study on drug use in a district hospital of West Bengal. Indian Journal of Public Health, 51(1), 75-76.
R157 Saxena, R., Katoch, S. C., Srinivas, U., Rao, S., & Anand, H. (2007). Impact of external haematology proficiency testing programme on quality of laboratories. Indian Journal of Medical Research, 126(5), 428-432.
R158 Seiber, E. E., & Robinson, A. L. (2007). Microfinance investments in quality at private clinics in Uganda: A case-control study. BMC Health Services Research, 7, 168.
R159 Seiber, E. E., Hotchkiss, D. R., Rous, J. J., & Berruti, A. A. (2005). Maternal and child health and family planning service utilization in Guatemala: Implications for service integration. Social Science & Medicine, 61(2), 279-291.
Semin, S., Demiral, Y., & Dicle, O. (2006).
R160 Trends in diagnostic imaging utilization in a university hospital in Turkey. International Journal of Technology Assessment in Health Care, 22(4), 532-536.
R161 Shankar, P. R., Upadhyay, D. K., Subish, P., Dubey, A. K., & Mishra, P. (2006). Prescribing patterns among paediatric inpatients in a teaching hospital in western Nepal. Singapore Medical Journal, 47(4), 261-265.
R162 Shrestha, M., Manandhar, D. S., Dhakal, S., & Nepal, N. (2006). Two year audit of perinatal mortality at Kathmandu Medical College Teaching Hospital. Kathmandu University Medical Journal, 4(2), 176-181.
R163 Siddiqui, N., & Khandaker, S. A. (2007). Comparison of services of public, private and foreign hospitals from the perspective of Bangladeshi patients. Journal of Health, Population & Nutrition, 25(2), 221-230.
R164 Solon, O., Woo, K., Quimbo, S. A., Shimkhada, R., Florentino, J., & Peabody, J. W. (2009). A novel method for measuring health care system performance: Experience from QIDS in the Philippines. Health Policy & Planning, 24(3), 167-174.
R165 Son, N. T., Thu, N. H., Tu, N. T., & Mock, C. (2007). Assessment of the status of resources for essential trauma care in Hanoi and Khanh Hoa, Vietnam. Injury, 38(9), 1014-1022.
R166 Srivastava, R. K., Kansal, S., Tiwari, V. K., Piang, L., Chand, R., & Nandan, D. (2009). Assessment of utilization of RCH services and client satisfaction at different level of health facilities in Varanasi district. Indian Journal of Public Health, 53(3), 183-189.
R167 Stanback, J., Griffey, S., Lynam, P., Ruto, C., & Cummings, S. (2007). Improving adherence to family planning guidelines in Kenya: An experiment. International Journal for Quality in Health Care, 19(2), 68-73.
R168 Su, T. T., & Sax, S. (2009). Key quality aspect: A fundamental step for quality improvement in a resource-poor setting. Asia-Pacific Journal of Public Health, 21(4), 477-486.
Taner, T., & Antony, J.
R169 (2006). Comparing public and private hospital care service quality in Turkey. International Journal of Health Care Quality Assurance Incorporating Leadership in Health Services, 19(2-3), i-x.
R170 Tibbets, M. W., Gomez, R., Kannangai, R., & Sridharan, G. (2006). Total quality management in clinical virology laboratories. Indian Journal of Medical Microbiology, 24(4), 258-262.
R171 Turan, J. M., Bulut, A., Nalbant, H., Ortayli, N., & Akalin, A. A. (2006). The quality of hospital-based antenatal care in Istanbul. Studies in Family Planning, 37(1), 49-60.
R172 Van Hemelrijck, M. J., Lindblade, K. A., Kubaje, A., Hamel, M. J., Odhiambo, F., Phillips-Howard, P. A., Laserson, K. F., Slutsker, L., & Feikin, D. R. (2009). Trends observed during a decade of paediatric sick visits to peripheral health facilities in rural western Kenya, 1997-2006. Tropical Medicine & International Health, 14(1), 62-69.
R173 Vaz, F. S., Ferreira, A. M., Kulkarni, M. S., & Motghare, D. D. (2007). Bed utilization indices at a tertiary care hospital in Goa: An eight year trend analysis. Indian Journal of Public Health, 51(4), 231-233.
R174 Wamwana, E. B., Ndavi, P. M., Gichangi, P. B., Karanja, J. G., Muia, E. G., & Jaldesa, G. W. (2007). Quality of record keeping in the intrapartum period at the provincial general hospital, Kakamega, Kenya. East African Medical Journal, 84(1), 16-23.
R175 Wong, R., & Bradley, E. H. (2009). Developing patient registration and medical records management system in Ethiopia. International Journal for Quality in Health Care, 21(4), 253-258.
R176 Wouters, E., Heunis, C., van Rensburg, D., & Meulemans, H. (2008). Patient satisfaction with antiretroviral services at primary health-care facilities in the Free State, South Africa: A two-year study using four waves of cross-sectional data. BMC Health Services Research, 8, 210.
Xu, K., Evans, D. B., Kadama, P., Nabyonga, J., Ogwal, P.
R177 O., Nabukhonzo, P., & Aguilar, A. M. (2006). Understanding the impact of eliminating user fees: Utilization and catastrophic health expenditures in Uganda. Social Science & Medicine, 62(4), 866-876.
R178 Yan, Y., Zhang, G., Chen, Y., Zhang, A., Guan, Y., & Ao, H. (2006). Study on the injection practices of health facilities in Jingzhou district, Hubei, China. Indian Journal of Medical Sciences, 60(10), 407-416.
R179 Yildirim, C., Kocoglu, H., Goksu, S., Gunay, N., & Savas, H. (2005). Patient satisfaction in a university hospital emergency department in Turkey. Acta Medica (Hradec Kralove), 48(1), 59-62.
R180 Yunuswangsa, Q., Nimmaanrat, S., & Wasinwong, W. (2008). Completion and accuracy in charting of anesthetic records in Songklanagarind Hospital. Journal of the Medical Association of Thailand, 91(7), 1002-1010.
R181 Zhang, M., Wang, H., Miao, J., Du, X., Li, T., & Wu, Z. (2009). Occupational exposure to blood and body fluids among health care workers in a general hospital, China. American Journal of Industrial Medicine, 52(2), 89-98.
APPENDIX 9: REFERENCES FOR ASSESSING METRIC RELIABILITY AND VALIDITY
Carmines, E. and R. Zeller. 1979. Reliability and Validity Assessment. Sage University Paper Series on Quantitative Applications in the Social Sciences, 07-004. Newbury Park, CA: Sage.
Fink, Arlene. 2005. Evaluation Fundamentals: Insights in the Outcomes, Effectiveness, and Quality of Health Programs. 2nd edition. Thousand Oaks, CA: Sage Publications.
Kirk, J. and M. Miller. 1986. Reliability and Validity in Qualitative Research. Newbury Park, CA: Sage Publications.
Litwin, M. 1995. How to Measure Survey Reliability and Validity. Thousand Oaks, CA: Sage Publications.
Litwin, M. 2002. How to Assess and Interpret Survey Psychometrics. Thousand Oaks, CA: Sage Publications.
McDowell, I. 2006. Measuring Health: A Guide to Rating Scales and Questionnaires. 3rd edition. New York: Oxford University Press.
Streiner, D. and G. Norman. 2008.
Health Measurement Scales: A Practical Guide to Their Development and Use. 4th edition. New York: Oxford University Press.
APPENDIX 10: REFERENCES FOR USE OF QUALITATIVE AND MIXED METHODS
Curry, L., R. Shield and T. Wetle (Eds.). 2006. Improving Aging and Public Health Research: Qualitative and Mixed Methods. Washington, DC: American Public Health Association and Gerontological Society of America.
Curry, L., I. Nembhard and E. Bradley. 2009. "Qualitative and Mixed Methods Provide Unique Contributions to Outcomes Research." Circulation. 119:1442-1452.
Creswell, J. and V. Piano Clark. 2007. Designing and Conducting Mixed Methods Research. Thousand Oaks, CA: Sage.
Glaser, B. and A. Strauss (Eds.). 1967. The Discovery of Grounded Theory: Strategies for Qualitative Research. Chicago, IL: Aldine.
Malterud, K. 2001. "The Art and Science of Clinical Knowledge: Evidence beyond Measures and Numbers." Lancet. 358:397-400.
Pope, C., P. van Royen and R. Baker. 2002. "Qualitative Methods in Research on Healthcare Quality." Quality and Safety in Health Care. 11:148-152.
Shortell, S. 1999. "The Emergence of Qualitative Methods in Health Services Research." Health Services Research. 34:1083-1090.
WORKS CITED
Aaker, J. and J. Shumaker. 1994. Looking Back and Looking Forward: A Participatory Approach to Evaluation. Little Rock, AR: Heifer Project International.
Aday, L. and R. Andersen. 1981. "Equity of Access to Medical Care: A Conceptual and Empirical Overview." Medical Care. 19(12 Supplement): 4-27.
Aharony, L. and S. Strasser. 1993. "Patient Satisfaction: What We Know and What We Still Need to Explore." Medical Care Research and Review. 50(1): 49-79.
Ahrari, M., A. Kuttab, S. Khamis, A. Farahat, G. Darmstadt, D. Marsh, and F. Levinson. 2002. "Factors Associated with Successful Pregnancy Outcomes in Upper Egypt: A Positive Deviance Inquiry." Food and Nutrition Bulletin. 23: 83-88.
Andersen, R. and L. Aday. 1978. "Access to Medical Care in the US: Realized and Potential."
Medical Care. 16(7): 533-46.
Anderson, P. 1999. "Complexity Theory and Organization Science." Organization Science. 10(3): 216-232.
Ashmos, D. and G. Huber. 1987. "The Systems Paradigm in Organization Theory: Correcting the Record and Suggesting the Future." The Academy of Management Review. 12(4): 607-21.
Auerbach, A., C. Landefeld, and K. Shojania. 2007. "The Tension between Needing to Improve Care and Knowing How to Do It." New England Journal of Medicine. 357:608-613.
Axelrod, R. and M. Cohen. 2000. Harnessing Complexity: Organizational Implications of a Scientific Frontier. New York: The Free Press.
Bernet, P., J. Moises, and V. G. Valdmanis. 2010. "Social Efficiency of Hospital Care Delivery: Frontier Analysis from the Consumer's Perspective." Medical Care Research and Review. E-publication 6 May 2010.
Berta, W. and R. Baker. 2004. "Factors that Impact the Transfer and Retention of Best Practices for Reducing Error in Hospitals." Health Care Management Review. 29:90-97.
Berta, W., G. Teare, E. Gilbart, L. Soberman Ginsburg, L. Lemieux-Charles, D. Davis, and S. Rappolt. 2005. "The Contingencies of Organizational Learning in Long-Term Care: Factors That Affect Innovation Adoption." Health Care Management Review. 30(4): 282-92.
Bradley, E., L. Curry, T. Webster, J. Mattera, S. Roumanis, M. Radford, et al. 2006. "Achieving Rapid Door-to-Balloon Times: How Top Hospitals Improve Complex Clinical Systems." Circulation. 113:1079-1085.
Bradley, E., L. Curry, S. Ramanadhan, L. Rowe, I. Nembhard, and H. Krumholz. 2009. "Research in Action: Using Positive Deviance to Improve Quality of Health Care." Implementation Science. 4: 25.
Buljac-Samardzic, M., C. Dekker-van Doorn, J. van Wijngaarden, and K. van Wijk. 2010. "Interventions to Improve Team Effectiveness: A Systematic Review." Health Policy. 94: 183-95.
Burns, L.R. 2002. The Health Care Value Chain. San Francisco: Jossey-Bass.
Caves, R. 1998.
"Industrial Organization and New Findings on the Turnover and Mobility of Firms." Journal of Economic Literature. 36(4): 1947-82.
Centers for Disease Control and Prevention (CDC). 1999. "Framework for Program Evaluation in Public Health." MMWR. 48(RR11): 1-40.
Chuang, Y-T., L. Ginsburg, and W.B. Berta. 2007. "Learning from Preventable Adverse Events in Health Care Organizations: Development of a Multilevel Model of Learning and Propositions." Health Care Management Review. 32(4):330-340.
Cohen, M., J. March, and J. Olsen. 1972. "A Garbage Can Model of Organizational Choice." Administrative Science Quarterly. 17(1):1-25.
Creswell, J. and V. Piano Clark. 2007. Designing and Conducting Mixed Methods Research. Thousand Oaks, CA: Sage.
Curry, L., I. Nembhard, and E. Bradley. 2009. "Qualitative and Mixed Methods Provide Unique Contributions to Outcomes Research." Circulation. 119: 1442-1452.
Cyert, R. and J. March. 1963. A Behavioral Theory of the Firm. Englewood Cliffs, NJ: Prentice-Hall.
Davies, H. and M. Nutley. 2000. "Developing Learning Organisations in the New NHS." British Medical Journal. 320: 998-1001.
De Savigny, D. and T. Adam (Eds.). 2009. Systems Thinking for Health Systems Strengthening. Alliance for Health Policy and Systems Research. Geneva: WHO.
Denzin, N. and Y. Lincoln. 2000. Handbook of Qualitative Research. 2nd edition. Thousand Oaks, CA: Sage.
DiMaggio, P. and W. Powell. 1983. "The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields." American Sociological Review. 48(2):147-160.
Donabedian, A. 1980. The Definition of Quality and Approaches to Its Assessment. Ann Arbor, MI: Health Administration Press.
Drazin, R. and A. Van de Ven. 1985. "Alternative Forms of Fit in Contingency Theory." Administrative Science Quarterly. 30(4):514-539.
Eccles, M., J. Grimshaw, M. Campbell, and C. Ramsay. 2003. "Research Designs for Studies Evaluating the Effectiveness of Change and Improvement Strategies."
Quality and Safety in Health Care. 12: 47-52.
Egger, D., P. Travis, D. Dovlo, and L. Hawken. 2005. Strengthening Management in Low-Income Countries. Working Paper No. 1, Making Health Systems Work. Geneva: World Health Organization.
Emery, F. and E. Trist. 1965. "The Causal Texture of Organizational Environments." Human Relations. 18: 21-32.
Fayol, H. 1949 [1916]. General and Industrial Management. Trans. Constance Storrs. London: Pitman.
Fiedler, J. 1981. "A Review of the Literature on Access and Utilization of Medical Care with Special Emphasis on Rural Primary Care." Social Science and Medicine. 15C: 129-42.
Fink, A. 2005. Evaluation Fundamentals: Insights into the Outcomes, Effectiveness, and Quality of Health Programs. 2nd edition. Thousand Oaks, CA: Sage Publications.
Fisher, E., J. Wennberg, T. Stukel, J. Skinner, S. Sharp, J. Freeman, and A. Gittelsohn. 2000. "Associations Among Hospital Capacity, Utilization, and Mortality of U.S. Medicare Beneficiaries, Controlling for Sociodemographic Factors." Health Services Research. 34(6): 1351-62.
Fishman, P., M. Hornbrook, R. Meenan, and M. Goodman. 2004. "Opportunities and Challenges for Measuring Cost, Quality, and Clinical Effectiveness in Health Care." Medical Care Research and Review. 61(3 Supplement): 124S-43S.
Flood, A. and M. Fennell. 1995. "Through the Lenses of Organizational Sociology: The Role of Organizational Theory and Research in Conceptualizing and Examining our Health Care System." Journal of Health and Social Behavior. 35(Extra Issue): 154-169.
Folland, S., A. Goodman, and M. Stano. 2006. The Economics of Health and Health Care. 5th edition. Upper Saddle River, NJ: Prentice Hall.
Ford, R., S. Bach, and M. Fottler. 1997. "Methods of Measuring Patient Satisfaction in Health Care Organizations." Health Care Management Review. 22(2): 74-89.
Freeman, J. 1994. Participatory Evaluations: Making Projects Work. Dialogue on Development Technical Paper No. TP94/2.
Calgary: The University of Calgary International Centre.
French, J. and B. Raven. 1959. "The Bases of Social Power." In Cartwright, D. (Ed.). Studies in Social Power. Ann Arbor, MI: University of Michigan Press.
Freud, S. 1923. The Ego and the Id. New York: W.W. Norton & Company.
Gantt, H. 1919. Organizing for Work. New York: Harcourt, Brace, and Howe.
Gilmartin, M. and T. D'Aunno. 2007. "Leadership Research in Health Care: A Review and Roadmap." Chapter 8 in The Academy of Management Annals. 1(1): 387-438.
Gold, M. 1998. "Beyond Coverage and Supply: Measuring Access to Healthcare in Today's Market." Health Services Research. 33(3 Part II): 625-52.
Green, J. and N. Britten. 1998. "Qualitative Research and Evidence Based Medicine." BMJ. 316: 1230-1232.
Green, L. and V. Nguyen. 2001. "Strategies for Cutting Hospital Beds: The Impact on Patient Service." Health Services Research. 36(2): 421-42.
Greenhalgh, T. 2002. "Integrating Qualitative Research into Evidence Based Practice." Endocrinology and Metabolism Clinics of North America. 31: 583-601, ix.
Greenhalgh, T., G. Robert, F. Macfarlane, P. Bate, and O. Kyriakidou. 2004. "Diffusion of Innovations in Service Organizations: Systematic Review and Recommendations." Milbank Quarterly. 82: 581-629.
Gruen, R., J. Elliott, M. Nolan, P. Lawton, A. Parkhill, C. McLaren, and J. Lavis. 2008. "Sustainability Science: An Integrated Approach for Health-Programme Planning." Lancet. 372: 1579-89.
Guba, E. and Y. Lincoln. 1989. Fourth Generation Evaluation. Newbury Park, CA: Sage Publications.
Gulick, L. and L. Urwick (Eds.). 1937. Papers on the Science of Administration. New York: Institute of Public Administration.
Gustafson, D., F. Sainfort, M. Eichler, L. Adams, M. Bisognano, and H. Steudel. 2003. "Developing and Testing a Model to Predict Outcomes of Organizational Change." Health Services Research. 38(2): 751-76.
Habicht, J., C. Victora, and J. Vaughan. 1999.
"Evaluation Designs for Adequacy, Plausibility, and Probability of Public Health Programme Performance and Impact." International Journal of Epidemiology. 28: 10-18.
Hall, A. 1998. "Medicaid's Impact on Access to and Utilization of Health Care Services Among Racial and Ethnic Minority Children." Journal of Urban Health. 75(4): 677-92.
Hamilton, S., S. McLaren, and A. Mulhall. 2007. "Assessing Organisational Readiness for Change: Use of Diagnostic Analysis Prior to the Implementation of a Multidisciplinary Assessment of Acute Stroke Care." Implementation Science. 2: 21.
Hollingsworth, B. 2008. "The Measurement of Efficiency and Productivity of Health Care Delivery." Health Economics. 17: 1107-28.
IOM (Institute of Medicine). 2001. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press.
Ishikawa, K. 1990. Introduction to Quality Control. 3rd edition. Loftus, J.H. (Trans.). Tokyo: 3A Corporation.
Israel, B., E. Eng, A. Schulz, and E. Parker (Eds.). 2005. Methods in Community-Based Participatory Research for Health. San Francisco, CA: Jossey-Bass.
Jick, T. 1979. "Mixing Qualitative and Quantitative Methods: Triangulation in Action." Administrative Science Quarterly. 602-610.
Joumard, I., C. Andre, and C. Nicq. 2010. Health Care Systems: Efficiency and Institutions. Economics Department Working Papers No. 769. Paris: OECD.
Kast, F. and J. Rosenzweig. 1972. "General Systems Theory: Applications for Organization and Management." The Academy of Management Journal. 15(4): 447-65.
Katz, D. and R. Kahn. 1966. The Social Psychology of Organizations. New York: Wiley.
Keeler, T. and J. Ying. 1996. "Hospital Costs and Excess Bed Capacity: A Statistical Analysis." The Review of Economics and Statistics. 78(3): 470-81.
Keroack, M., B. Youngberg, J. Cerese, C. Krsek, L. Prellwitz, and E. Trevelyan. 2007. "Organizational Factors Associated with High Performance in Quality and Safety in Academic Medical Centers." Academic Medicine. 82: 1178-1186.
Kissick, W.
1994. Medicine's Dilemmas: Infinite Needs versus Finite Resources. New Haven: Yale University Press.
Klein, K., J. Ziegert, A. Knight, and Y. Xiao. 2006. "Dynamic Delegation: Shared, Hierarchical, and Deindividualized Leadership in Extreme Action Teams." Administrative Science Quarterly. 51: 590-621.
Landsberger, H. 1958. Hawthorne Revisited. New York: Cornell University Press.
Latino, R. and K. Latino. 2006. Root Cause Analysis: Improving Performance for Bottom Line Results. 3rd edition. Boca Raton, FL: Taylor and Francis.
Lawrence, P. and J. Lorsch. 1967. "Differentiation and Integration in Complex Organizations." Administrative Science Quarterly. 12(1): 1-47.
Leibenstein, H. 1976. Beyond Economic Man. Cambridge, MA: Harvard University Press.
Lemieux-Charles, L. and W. McGuire. 2006. "What Do We Know about Health Care Team Effectiveness? A Review of the Literature." Medical Care Research and Review. 63: 263-300.
Levitt, B. and J.G. March. 1988. "Organizational Learning." Annual Review of Sociology. 14(3): 319-40.
Lewin, K. 1935. A Dynamic Theory of Personality. New York: McGraw-Hill.
Lewin, K. 1936. Principles of Topological Psychology. New York: McGraw-Hill.
Lukas, C., S. Holmes, A. Cohen, J. Restuccia, I. Cramer, M. Shwartz, et al. 2007. "Transformational Change in Health Care Systems: An Organizational Model." Health Care Management Review. 32: 309-320.
March, J. 1978. "Bounded Rationality, Ambiguity, and the Engineering of Choice." The Bell Journal of Economics. 9(2): 587-608.
March, J. 1991. "Exploration and Exploitation in Organizational Learning." Organization Science. 2(1): 71-87.
Marsh, D., H. Pachon, D. Schroeder, T. Ha, K. Dearden, T. Lang, N. Hien, D. Tuan, T. Thach, and D. Claussenius. 2002. "Design of a Prospective, Randomized Evaluation of an Integrated Nutrition Program in Rural Viet Nam." Food and Nutrition Bulletin. 23: 36-47.
Marsh, D., D. Schroeder, K. Dearden, J. Sternin, and M. Sternin. 2004. "The Power of Positive Deviance." BMJ.
329: 1177-1179.
Meyer, J. and B. Rowan. 1977. "Institutionalized Organizations: Formal Structure as Myth and Ceremony." American Journal of Sociology. 83(2): 340-363.
Mills, P. and W. Weeks. 2004. "Characteristics of Successful Quality Improvement Teams: Lessons from Five Collaborative Projects in the VHA." Joint Commission Journal on Quality and Patient Safety. 30: 152-162.
Minkler, M. and N. Wallerstein (Eds.). 2002. Community-Based Participatory Research for Health. San Francisco, CA: Jossey-Bass.
Mintzberg, H. 1983. Power In and Around Organizations. Englewood Cliffs, NJ: Prentice Hall.
Mooney, J.D. 1947. The Principles of Organization. New York: Harper and Brothers.
Moss, F. and P. Garside. 1995. "The Importance of Quality: Sharing the Responsibility for Patient Care." British Medical Journal. 310(6985): 996-9.
Munsterberg, H. 1913. Psychology and Industrial Efficiency. Boston: Houghton Mifflin Company.
Murphy, M. and C. Noetscher. 1999. "Reducing Hospital Inpatient Lengths of Stay." Journal of Nursing Care Quality. 14(1 Special Issue): 40-54.
Negrini, D., A. Kettle, L. Sheppard, G.H. Mills, and D.L. Edbrooke. 2004. "The Cost of a Hospital Ward in Europe: Is There a Methodology Available to Accurately Measure the Costs?" Journal of Health Organization and Management. 18(2/3): 195-206.
Nelson, E., P. Batalden, T. Huber, J. Mohr, M. Godfrey, L. Headrick, et al. 2002. "Microsystems in Health Care: Part 1. Learning from High-Performing Front-line Clinical Units." Joint Commission Journal on Quality and Patient Safety. 28: 472-493.
Nembhard, I. 2009. "Learning and Improving in Quality Improvement Collaboratives: Which Collaborative Features Do Participants Value Most?" Health Services Research. 44(2 Part I): 359-78.
Newhouse, J. 1970. "Toward a Theory of Nonprofit Institutions: An Economic Model of a Hospital." The American Economic Review. 60(1): 64-74.
Newhouse, J. 1994. "Frontier Estimation: How Useful a Tool for Health Economics?" Journal of Health Economics.
13: 317-22.
Nohria, N. 1998. "Is a Network Perspective a Useful Way of Studying Organizations?" In Hickman, G.R. (Ed.). Leading Organizations: Perspectives for a New Era. Thousand Oaks, CA: Sage Publications.
Olsen, I. 1998. "Sustainability of Health Care: A Framework for Analysis." Health Policy and Planning. 13(3): 287-95.
O'Mahony, S., J. McHenry, D. Snow, C. Cassin, D. Schumacher, and P. Selwyn. 2008. "A Review of Barriers to Utilization of the Medicare Hospice Benefits in Urban Populations and Strategies for Enhanced Access." Journal of Urban Health. 85(2): 281-90.
Ovretveit, J., T. Scott, T. Rundall, S. Shortell, and M. Brommels. 2007. "Implementation of Electronic Medical Records in Hospitals: Two Case Studies." Health Policy. 84: 181-90.
Parsons, T. 1951. The Social System. Glencoe, IL: The Free Press.
Patton, M. 2002. Qualitative Research and Evaluation Methods. 3rd edition. Thousand Oaks, CA: Sage.
Pauly, M. and P. Wilson. 1986. "The Cost of Empty Hospital Beds." Health Services Research. 21(3): 403-28.
Pawson, R. and N. Tilley. 1997. Realistic Evaluation. London: Sage Publications.
Penchansky, R. and J.W. Thomas. 1981. "The Concept of Access: Definition and Relationship to Consumer Satisfaction." Medical Care. 19(2): 127-40.
Peters, D., S. El-Saharty, B. Siadat, K. Janovsky, and M. Vujicic (Eds.). 2009. Improving Health Service Delivery in Developing Countries: From Evidence to Action. Washington, DC: The World Bank.
Peters, D., A. Garg, G. Bloom, D. Walker, W. Brieger, and M. Hafizur Rahman. 2008. "Poverty and Access to Health Care in Developing Countries." Annals of the New York Academy of Sciences. 1136: 161-71.
Peterson, M. 1993. "Political Influence in the 1990s: From Iron Triangles to Policy Networks." Journal of Health Politics, Policy and Law. 18(2): 395-438.
Pfeffer, J. 1981. Power in Organizations. Cambridge: Ballinger Publishing Company.
Pfeffer, J. and G. Salancik. 1978.
The External Control of Organizations: A Resource Dependence Perspective. New York: Harper & Row.
Pindyck, R. and D. Rubinfeld. 2000. Microeconomics. 5th edition. Upper Saddle River, NJ: Prentice Hall.
Pisano, G., R. Bohmer, and A. Edmondson. 2001. "Organizational Differences in Rates of Learning: Evidence from the Adoption of Minimally Invasive Cardiac Surgery." Management Science. 47(6): 752-68.
Pluye, P., L. Potvin, and J.L. Denis. 2004. "Making Public Health Programs Last: Conceptualizing Sustainability." Evaluation and Program Planning. 27(2): 121-33.
Pope, C., P. van Royen, and R. Baker. 2002. "Qualitative Methods in Research on Healthcare Quality." Quality and Safety in Health Care. 11: 148-152.
Porter, M. and E. Teisberg. 2006. Redefining Health Care: Creating Value-Based Competition on Results. Boston: Harvard Business School Press.
Positive Deviance Initiative. "Projects." http://www.positivedeviance.org/projects/
Reinhardt, U. 1998. "Quality in Consumer-Driven Health Systems." International Journal for Quality in Health Care. 10(5): 385-94.
Roberts, M., W. Hsiao, P. Berman, and M. Reich. 2004. Getting Health Reform Right: A Guide to Improving Performance and Equity. New York: Oxford University Press.
Rosko, M. and R. Mutter. 2008. "Stochastic Frontier Analysis of Hospital Inefficiency: A Review of Empirical Issues and an Assessment of Robustness." Medical Care Research and Review. 65(2): 131-66.
Rosko, M. and R. Mutter. 2010. "What Have We Learned From the Application of Stochastic Frontier Analysis to U.S. Hospitals?" Medical Care Research and Review. E-publication 2 June 2010.
Rubenstein, L., L. Parker, L. Meredith, A. Altschuler, E. dePillis, J. Hernandez, et al. 2002. "Understanding Team-Based Quality Improvement for Depression in Primary Care." Health Services Research. 37: 1009-1029.
Safran, D. 2003. "Defining the Future of Primary Care: What Can We Learn from Patients?" Annals of Internal Medicine. 138(3): 248-55.
Safran, D., M. Karp, K. Coltin, H.
Chang, A. Li, J. Ogren, and W. Rogers. 2006. "Measuring Patients' Experiences with Individual Primary Care Physicians: Results of a Statewide Demonstration Project." Journal of General Internal Medicine. 21(1): 13-21.
Sarriot, E., P. Winch, L. Ryan, J. Bowie, M. Kouletio, E. Swedberg, K. LeBan, J. Edison, R. Welch, and M. Pacque. 2004. "A Methodological Approach and Framework for Sustainability Assessment in NGO-Implemented Primary Health Care Programs." International Journal of Health Planning and Management. 19: 23-41.
Schuster, M., E. McGlynn, and R. Brook. 1998. "How Good is the Quality of Health Care in the United States?" The Milbank Quarterly. 76(4): 517-63.
Schwarz, N. and S. Sudman (Eds.). 1996. Answering Questions: Methodology for Determining Cognitive and Communicative Processes in Survey Research. San Francisco: Jossey-Bass, Inc.
Scott, W. 1961. "Organization Theory: An Overview and Appraisal." The Journal of the Academy of Management. 4(1): 7-26.
Scott, W. 2004. "Reflections on a Half-Century of Organizational Sociology." Annual Review of Sociology. 30: 1-21.
Scott, W. and G. Davis. 2006. Organizations and Organizing: Rational, Natural, and Open Systems. Upper Saddle River, NJ: Prentice Hall.
Senge, P. 1990. The Fifth Discipline: The Art and Practice of the Learning Organization. New York: Doubleday.
Shediac-Rizkallah, M. and L. Bone. 1998. "Planning for Sustainability of Community-Based Health Programs: Conceptual Frameworks and Future Directions for Research, Practice, and Policy." Health Education Research. 13(1): 87-108.
Sherman, H.D. 1984. "Hospital Efficiency Measurement and Evaluation: Empirical Test of a New Technique." Medical Care. 22(10): 922-38.
Shortell, S. 1999. "The Emergence of Qualitative Methods in Health Services Research." Health Services Research. 34: 1083-1090.
Shortell, S. and T. Rundall. 2003. "Physician-Organization Relationships: Social Organizations and Strategic Intent." In Mick, S. and M. Wyttenbach (Eds.).
Advances in Health Care Organization Theory. San Francisco: Jossey-Bass.
Shortell, S. 2004. "Increasing Value: A Research Agenda for Addressing the Managerial and Organizational Challenges Facing Health Care Delivery in the United States." Medical Care Research and Review. 61(3 Supplement): 12S-30S.
Shortell, S., J. Marsteller, M. Lin, M. Pearson, S. Wu, P. Mendel, et al. 2004. "The Role of Perceived Team Effectiveness in Improving Chronic Illness Care." Medical Care. 42: 1040-1048.
Skinner, B.F. 1953. Science and Human Behavior. Upper Saddle River, NJ: Pearson Education, Inc.
Smith, K. and D. Berg. 1987. Paradoxes of Group Life. San Francisco: Jossey-Bass, Inc.
Sofaer, S. 1999. "Qualitative Methods: What Are They and Why Use Them?" Health Services Research. 34: 1101-1118.
Sternin, J. and R. Choo. 2000. "The Power of Positive Deviancy: An Effort to Reduce Malnutrition in Vietnam Offers an Important Lesson about Managing Change." Harvard Business Review. 78: 14-15.
Sternin, M., J. Sternin, and D. Marsh. 1999. "Scaling up a Poverty Alleviation and Nutrition Program in Viet Nam." In Marchione, T. (Ed.). Scaling Up, Scaling Down: Capacities for Overcoming Malnutrition in Developing Countries. Amsterdam: Gordon and Breach.
Stetler, C., J. Ritchie, J. Rycroft-Malone, A. Schultz, and M. Charns. 2009. "Institutionalizing Evidence-Based Practice: An Organizational Case Study Using a Model of Strategic Change." Implementation Science. 4: 78.
Sudman, S., N. Bradburn, and N. Schwarz. 1996. Thinking About Answers: The Application of Cognitive Processes to Survey Methodology. San Francisco: Jossey-Bass Publishers.
Susman, G. 1983. Action Research: Sociotechnical Systems Perspective. London: Sage Publications.
Taylor, F. 1916. "The Principles of Scientific Management." Reprinted in Shafritz, J., J.S. Ott, and Y.S. Jang. 2005. Classics of Organization Theory. 6th edition. Boston: Thomson Wadsworth.
Tucker, A. and A. Edmondson. 2003.
"Why Hospitals Don't Learn from Failure: Organizational and Psychological Dynamics that Inhibit System Change." California Management Review. 45(2): 55-72.
Van de Ven, A. 1995. "Explaining Development and Change in Organizations." Academy of Management Review. 20: 510-540.
Varian, H. 1992. Microeconomic Analysis. 3rd edition. New York: W.W. Norton & Company.
Victora, C., A. Wagstaff, J. Armstrong Schellenberg, D. Gwatkin, M. Claeson, and J.P. Habicht. 2003. "Applying an Equity Lens to Child Health and Mortality: More of the Same is Not Enough." Lancet. 362: 233-41.
Vitaliano, D. and M. Toren. 1994. "Cost and Efficiency in Nursing Homes: A Stochastic Frontier Approach." Journal of Health Economics. 13: 281-300.
Vitikainen, K., A. Street, and M. Linna. 2009. "Estimation of Hospital Efficiency: Do Different Definitions and Casemix Measures for Hospital Output Affect the Results?" Health Policy. 89: 149-59.
Waitzkin, H. 2000. The Second Sickness. Lanham, MD: Rowman & Littlefield Publishers, Inc.
Waitzkin, H. 2001. At the Front Lines of Medicine: How the Health Care System Alienates Doctors and Mistreats Patients...and What We Can Do about It. Lanham, MD: Rowman & Littlefield Publishers, Inc.
Walker, L., B. Sterling, M. Hoke, and K. Dearden. 2007. "Applying the Concept of Positive Deviance to Public Health Data: A Tool for Reducing Health Disparities." Public Health Nursing. 24: 571-576.
Weber, M. 1925. Economy and Society. English translation. Roth, G. and C. Wittich (Eds.). 1978. Berkeley: University of California Press.
Weick, K.E. 1976. "Educational Organizations as Loosely Coupled Systems." Administrative Science Quarterly. 21(1): 1-19.
Weiner, B., H. Amick, and S.D. Lee. 2008. "Review: Conceptualization and Measurement of Organizational Readiness for Change: A Review of the Literature in Health Services Research and Other Fields." Medical Care Research and Review. 65(4): 379-436.
Wennberg, J., J. Freeman, and W. Culp. 1987.
"Are Hospital Services Rationed in New Haven or Over-Utilized in Boston?" Lancet. 1(8543): 1185-89.
World Bank. 2009. "Country Eligibility for Borrowing from the World Bank - 1 July 2009." The World Bank Annual Report 2009: Year in Review. Washington, DC: The World Bank.
Yuan, C., I. Nembhard, A. Stern, et al. 2010. Blueprint for the Dissemination of Evidence-Based Practices in Health Care. The Commonwealth Fund.
Yukl, G. 1989. "Managerial Leadership: A Review of Theory and Research." Journal of Management. 15(2): 251-89.
Zinn, J. and A. Flood. 2009. "Commentary: Slack Resources in Health Care Organizations: Fat to be Trimmed or Muscle to be Exercised?" Health Services Research. 44(3): 812-820.
Zuckerman, S., J. Hadley, and L. Iezzoni. 1994. "Measuring Hospital Efficiency with Frontier Cost Functions." Journal of Health Economics. 13: 255-85.