Program Evaluation and Performance Measurement: An Introduction to Practice, 3rd Edition, by James McDavid, Irene Huse, and Laura Ingleson (ISBN-10: 1506337066, ISBN-13: 978-1506337067)

Product details:
ISBN-10: 1506337066
ISBN-13: 978-1506337067
Authors: James McDavid, Irene Huse, Laura Ingleson
Program Evaluation and Performance Measurement offers a conceptual and practical introduction to program evaluation and performance measurement for public and non-profit organizations. James C. McDavid, Irene Huse, and Laura R.L. Hawthorn treat each topic in detail, making the book a useful guide both for practitioners who are constructing and implementing performance measurement systems and for students. Woven into the chapters is the performance management cycle in organizations, which includes strategic planning and resource allocation; program and policy design; implementation and management; and the assessment and reporting of results.
The Third Edition has been revised to highlight and integrate the current economic, political, and socio-demographic context within which evaluators are expected to work, and it includes new exemplars, such as the evaluation of body-worn police cameras.
Program Evaluation and Performance Measurement: An Introduction to Practice, 3rd Edition, table of contents:
Chapter 1 • Key Concepts and Issues in Program Evaluation and Performance Management
Introduction
Integrating Program Evaluation and Performance Measurement
Connecting Evaluation to the Performance Management System
The Performance Management Cycle
Policies and Programs
Key Concepts in Program Evaluation
Causality in Program Evaluations
Formative and Summative Evaluations
Ex Ante and Ex Post Evaluations
The Importance of Professional Judgment in Evaluations
Example: Evaluating a Police Body-Worn Camera Program in Rialto, California
The Context: Growing Concerns With Police Use of Force and Community Relationship
Implementing and Evaluating the Effects of Body-Worn Cameras in the Rialto Police Department
Program Success Versus Understanding the Cause-and-Effect Linkages: The Challenge of Unpacking the Body-Worn Police Cameras “Black Box”
Connecting Body-Worn Camera Evaluations to This Book
Ten Key Evaluation Questions
The Steps in Conducting a Program Evaluation
General Steps in Conducting a Program Evaluation
Assessing the Feasibility of the Evaluation
Doing the Evaluation
Making Changes Based on the Evaluation
Summary
Discussion Questions
References
Chapter 2 • Understanding and Applying Program Logic Models
Introduction
Logic Models and the Open Systems Approach
A Basic Logic Modeling Approach
An Example of the Most Basic Type of Logic Model
Working With Uncertainty
Problems as Simple, Complicated, and Complex
Interventions as Simple, Complicated, or Complex
The Practical Challenges of Using Complexity Theory in Program Evaluations
Program Objectives and Program Alignment With Government Goals
Specifying Program Objectives
Alignment of Program Objectives With Government and Organizational Goals
Program Theories and Program Logics
Systematic Reviews
Contextual Factors
Realist Evaluation
Putting Program Theory Into Perspective: Theory-Driven Evaluations and Evaluation Practice
Logic Models That Categorize and Specify Intended Causal Linkages
Constructing a Logic Model for Program Evaluations
Logic Models for Performance Measurement
Strengths and Limitations of Logic Models
Logic Models in a Turbulent World
Summary
Discussion Questions
Appendices
Appendix A: Applying What You Have Learned: Development of a Logic Model for a Meals on Wheels Program
Translating a Written Description of a Meals on Wheels Program Into a Program Logic Model
Appendix B: A Complex Logic Model Describing Primary Health Care in Canada
Appendix C: Logic Model for the Canadian Evaluation Society Credentialed Evaluator Program
References
Chapter 3 • Research Designs for Program Evaluations
Introduction
Our Stance
What Is Research Design?
The Origins of Experimental Design
Why Pay Attention to Experimental Designs?
Using Experimental Designs to Evaluate Programs
The Perry Preschool Study
Limitations of the Perry Preschool Study
The Perry Preschool Study in Perspective
Defining and Working With the Four Basic Kinds of Threats to Validity
Statistical Conclusions Validity
Internal Validity
Police Body-Worn Cameras: Randomized Controlled Trials and Quasi-Experiments
Construct Validity
The ‘Measurement Validity’ Component of Construct Validity
Other Construct Validity Problems
External Validity
Quasi-experimental Designs: Navigating Threats to Internal Validity
The York Neighborhood Watch Program: An Example of an Interrupted Time Series Research Design Where the Program Starts, Stops, and Then Starts Again
Findings and Conclusions From the Neighborhood Watch Evaluation
Non-Experimental Designs
Testing the Causal Linkages in Program Logic Models
Research Designs and Performance Measurement
Summary
Discussion Questions
Appendices
Appendix 3A: Basic Statistical Tools for Program Evaluation
Appendix 3B: Empirical Causal Model for the Perry Preschool Study
Appendix 3C: Estimating the Incremental Impact of a Policy Change—Implementing and Evaluating an Admission Fee Policy in the Royal British Columbia Museum
References
Chapter 4 • Measurement for Program Evaluation and Performance Monitoring
Introduction
Introducing Reliability and Validity of Measures
Understanding the Reliability of Measures
Understanding Measurement Validity
Types of Measurement Validity
Ways to Assess Measurement Validity
Validity Types That Relate a Single Measure to a Corresponding Construct
Validity Types That Relate Multiple Measures to One Construct
Validity Types That Relate Multiple Measures to Multiple Constructs
Units of Analysis and Levels of Measurement
Nominal Level of Measurement
Ordinal Level of Measurement
Interval and Ratio Levels of Measurement
Sources of Data in Program Evaluations and Performance Measurement Systems
Existing Sources of Data
Sources of Data Collected by the Program Evaluator
Surveys as an Evaluator-Initiated Data Source in Evaluations
Working With Likert Statements in Surveys
Designing and Conducting Surveys
Structuring Survey Instruments: Design Considerations
Using Surveys to Estimate the Incremental Effects of Programs
Addressing Challenges of Personal Recall
Retrospective Pre-tests: Where Measurement Intersects With Research Design
Survey Designs Are Not Research Designs
Validity of Measures and the Validity of Causes and Effects
Summary
Discussion Questions
References
Chapter 5 • Applying Qualitative Evaluation Methods
Introduction
Comparing and Contrasting Different Approaches to Qualitative Evaluation
Understanding Paradigms and Their Relevance to Evaluation
Pragmatism as a Response to the Philosophical Divisions Among Evaluators
Alternative Criteria for Assessing Qualitative Research and Evaluations
Qualitative Evaluation Designs: Some Basics
Appropriate Applications for Qualitative Evaluation Approaches
Comparing and Contrasting Qualitative and Quantitative Evaluation Approaches
Designing and Conducting Qualitative Program Evaluations
1. Clarifying the Evaluation Purpose and Questions
2. Identifying Research Designs and Appropriate Comparisons
Within-Case Analysis
Between-Case Analysis
3. Mixed-Methods Evaluation Designs
4. Identifying Appropriate Sampling Strategies in Qualitative Evaluations
5. Collecting and Coding Qualitative Data
Structuring Data Collection Instruments
Conducting Qualitative Interviews
6. Analyzing Qualitative Data
7. Reporting Qualitative Results
Assessing the Credibility and Generalizability of Qualitative Findings
Connecting Qualitative Evaluation Methods to Performance Measurement
The Power of Case Studies
Summary
Discussion Questions
References
Chapter 6 • Needs Assessments for Program Development and Adjustment
Introduction
General Considerations Regarding Needs Assessments
What Are Needs and Why Do We Conduct Needs Assessments?
Group-Level Focus for Needs Assessments
How Needs Assessments Fit Into the Performance Management Cycle
Recent Trends and Developments in Needs Assessments
Perspectives on Needs
A Note on the Politics of Needs Assessment
Steps in Conducting Needs Assessments
Phase I: Pre-Assessment
1. Focusing the Needs Assessment
2. Forming the Needs Assessment Committee (NAC)
3. Learning as Much as We Can About Preliminary “What Should Be” and “What Is” Conditions From Available Sources
4. Moving to Phase II and/or III or Stopping
Phase II: The Needs Assessment
5. Conducting a Full Assessment About “What Should Be” and “What Is”
6. Needs Assessment Methods Where More Knowledge Is Needed: Identifying the Discrepancies
7. Prioritizing the Needs to Be Addressed
8. Causal Analysis of Needs
9. Identification of Solutions: Preparing a Document That Integrates Evidence and Recommendations
10. Moving to Phase III or Stopping
Phase III: Post-Assessment: Implementing a Needs Assessment
11. Making Decisions to Resolve Needs and Select Solutions
12. Developing Action Plans
13. Implementing, Monitoring and Evaluating
Needs Assessment Example: Community Health Needs Assessment in New Brunswick
The Needs Assessment Process
Focusing the Needs Assessment
Forming the Needs Assessment Committee
Learning About the Community Through a Quantitative Data Review
Learning About Key Issues in the Community Through Qualitative Interviews and Focus Groups
Triangulating the Qualitative and Quantitative Lines of Evidence
Prioritizing Primary Health-Related Issues in the Community
Summary
Discussion Questions
Appendices
Appendix A: Case Study: Designing a Needs Assessment for a Small Nonprofit Organization
The Program
Your Role
Your Task
References
Chapter 7 • Concepts and Issues in Economic Evaluation
Introduction
Why an Evaluator Needs to Know About Economic Evaluation
Connecting Economic Evaluation With Program Evaluation: Program Complexity and Outcome Attribution
Program Complexity and Determining Cost-Effectiveness of Program Success
The Attribution Issue
Three Types of Economic Evaluation
The Choice of Economic Evaluation Method
Economic Evaluation in the Performance Management Cycle
Historical Developments in Economic Evaluation
Cost–Benefit Analysis
Standing
Valuing Nonmarket Impacts
Revealed and Stated Preferences Methods for Valuing Nonmarket Impacts
Steps for Economic Evaluations
1. Specify the Set of Alternatives
2. Decide Whose Benefits and Costs Count (Standing)
3. Categorize and Catalog the Costs and Benefits
4. Predict Costs and Benefits Quantitatively Over the Life of the Project
5. Monetize (Attach Dollar Values to) All Costs and Benefits
6. Select a Discount Rate for Costs and Benefits Occurring in the Future
7. Compare Costs With Outcomes, or Compute the Net Present Value of Each Alternative
8. Perform Sensitivity and Distributional Analysis
9. Make a Recommendation
Cost–Effectiveness Analysis
Cost–Utility Analysis
Cost–Benefit Analysis Example: The High/Scope Perry Preschool Program
1. Specify the Set of Alternatives
2. Decide Whose Benefits and Costs Count (Standing)
3. Categorize and Catalog Costs and Benefits
4. Predict Costs and Benefits Quantitatively Over the Life of the Project
5. Monetize (Attach Dollar Values to) All Costs and Benefits
6. Select a Discount Rate for Costs and Benefits Occurring in the Future
7. Compute the Net Present Value of the Program
8. Perform Sensitivity and Distributional Analysis
9. Make a Recommendation
Strengths and Limitations of Economic Evaluation
Strengths of Economic Evaluation
Limitations of Economic Evaluation
Summary
Discussion Questions
References
Chapter 8 • Performance Measurement as an Approach to Evaluation
Introduction
The Current Imperative to Measure Performance
Performance Measurement for Accountability and Performance Improvement
Growth and Evolution of Performance Measurement
Performance Measurement Beginnings in Local Government
Federal Performance Budgeting Reform
The Emergence of New Public Management
Steering, Control, and Performance Improvement
Metaphors That Support and Sustain Performance Measurement
Organizations as Machines
Government as a Business
Organizations as Open Systems
Comparing Program Evaluation and Performance Measurement Systems
Summary
Discussion Questions
References
Chapter 9 • Design and Implementation of Performance Measurement Systems
Introduction
The Technical/Rational View and the Political/Cultural View
Key Steps in Designing and Implementing a Performance Measurement System
1. Leadership: Identify the Organizational Champions of This Change
2. Understand What Performance Measurement Systems Can and Cannot Do
3. Communication: Establish Multi-Channel Ways of Communicating That Facilitate Top-Down, Bottom-Up, and Horizontal Sharing of Information, Problem Identification, and Problem Solving
4. Clarify the Expectations for the Intended Uses of the Performance Information That Is Created
5. Identify the Resources and Plan for the Design, Implementation, and Maintenance of the Performance Measurement System
6. Take the Time to Understand the Organizational History Around Similar Initiatives
7. Develop Logic Models for the Programs for Which Performance Measures Are Being Designed and Identify the Key Constructs to Be Measured
8. Identify Constructs Beyond Those in Single Programs: Consider Programs Within Their Place in the Organizational Structure
9. Involve Prospective Users in Development of Logic Models and Constructs in the Proposed Performance Measurement System
10. Translate the Constructs Into Observable Performance Measures That Compose the Performance Measurement System
11. Highlight the Comparisons That Can Be Part of the Performance Measurement System
12. Reporting and Making Changes to the Performance Measurement System
Performance Measurement for Public Accountability
Summary
Discussion Questions
Appendix A: Organizational Logic Models
References
Chapter 10 • Using Performance Measurement for Accountability and Performance Improvement
Introduction
Using Performance Measures
Performance Measurement in a High-Stakes Environment: The British Experience
Assessing the “Naming and Shaming” Approach to Performance Management in Britain
A Case Study of Gaming: Distorting the Output of a Coal Mine
Performance Measurement in a Medium-Stakes Environment: Legislator Expected Versus Actual Uses of Performance Reports in British Columbia, Canada
The Role of Incentives and Organizational Politics in Performance Measurement Systems With a Public Reporting Emphasis
Performance Measurement in a Low-Stakes Environment: Joining Internal and External Uses of Performance Information in Lethbridge, Alberta
Rebalancing Accountability-Focused Performance Measurement Systems to Increase Performance Improvement Uses
Making Changes to a Performance Measurement System
Does Performance Measurement Give Managers the “Freedom to Manage?”
Decentralized Performance Measurement: The Case of a Finnish Local Government
When Performance Measurement Systems De-Emphasize Outputs and Outcomes: Performance Management Under Conditions of Chronic Fiscal Restraint
Summary
Discussion Questions
References
Chapter 11 • Program Evaluation and Program Management
Introduction
Internal Evaluation: Views From the Field
Intended Evaluation Purposes and Managerial Involvement
When the Evaluations Are for Formative Purposes
When the Evaluations Are for Summative Purposes
Optimizing Internal Evaluation: Leadership and Independence
Who Leads the Internal Evaluation?
“Independence” for Evaluators
Building an Evaluative Culture in Organizations: An Expanded Role for Evaluators
Creating Ongoing Streams of Evaluative Knowledge
Critical Challenges to Building and Sustaining an Evaluative Culture
Building an Evaluative/Learning Culture in a Finnish Local Government: Joining Performance Measurement and Performance Management
Striving for Objectivity in Program Evaluations
Can Program Evaluators Claim Objectivity?
Objectivity and Replicability
Implications for Evaluation Practice: A Police Body-Worn Cameras Example
Criteria for High-Quality Evaluations
Summary
Discussion Questions
References
Chapter 12 • The Nature and Practice of Professional Judgment in Evaluation
Introduction
The Nature of the Evaluation Enterprise
Our Stance
Reconciling the Diversity in Evaluation Theory With Evaluation Practice
Working in the Swamp: The Real World of Evaluation Practice
Ethical Foundations of Evaluation Practice
Power Relationships and Ethical Practice
Ethical Guidelines for Evaluation Practice
Evaluation Association-Based Ethical Guidelines
Understanding Professional Judgment
What Is Good Evaluation Theory and Practice?
Tacit Knowledge
Balancing Theoretical and Practical Knowledge in Professional Practice
Aspects of Professional Judgment
The Professional Judgment Process: A Model
The Decision Environment
Values, Beliefs, and Expectations
Cultural Competence in Evaluation Practice
Improving Professional Judgment in Evaluation
Mindfulness and Reflective Practice
Professional Judgment and Evaluation Competencies
Education and Training-Related Activities
Teamwork and Improving Professional Judgment
The Prospects for an Evaluation Profession
Summary
Discussion Questions
Appendix
Appendix A: Fiona’s Choice: An Ethical Dilemma for a Program Evaluator
Your Task
References
Glossary
Index