Syllabus: Health Education 6100
Program Evaluation (3 hours)
Fall Semester 2003
Time and Location: Thursday, 4:35 pm to 7:00 pm - HPR N. 226
Karol L. Kumpfer, Ph.D.,
Associate Professor of Health Education
Office: Annex/2130B - Office Hours: Thursday 3:30 - 4:30 pm, or any other time by appointment
(8:00 am to 6:00 pm)
Office Telephone Number: 581-7718
The purpose of this course is to teach graduate students the professional
skills needed either to supervise and work with professional evaluators or to
design and implement program evaluations themselves. To do this, they will learn
to develop evaluation logic models, design program evaluations, select and develop
evaluation instruments or measures, collect and analyze data, and interpret and
disseminate program results.
The objectives of the course are to:
1. Assemble, review,
and evaluate the usefulness of evaluation resource materials needed for professional
work in program evaluation,
2. Write or help
program designers write appropriate and measurable program goals and objectives,
3. Develop analytic
evaluation "logic models" linking resources, program activities, and intermediate
and ultimate program objectives as a first step in designing program evaluations,
4. Review and select
the most appropriate process and outcome evaluation designs within the scope of
the evaluation efforts, cost, and resources available,
5. Select, modify,
or develop the most appropriate instruments and data collection strategies for
each client change objective from a personally-developed measurement library,
6. Select and implement
appropriate data collection, data reduction, and analysis strategies,
7. Interpret data
and effectively write evaluation reports that translate evaluation results into
terms understood by the program staff and funding sources, and
8. Highlight important
evaluation findings and recommend effective dissemination strategies.
Required Text:
E.J. Posavac and
R.G. Carey, Program Evaluation--Methods and Case Studies, Prentice Hall, 1996
Kumpfer, K. L.
Program Evaluation: Instructor's Manual. This contains the instructor's overheads
from lectures and other suggested reading and resource materials.
Kumpfer, K. L., Shur, G. H., Ross, J. G., Bunnell, K. K., Librett, J. J., &
Millward, A. R. (1993). Measurements in Prevention: A Manual on Selecting and
Using Instruments to Evaluate Prevention Programs. Center for Substance Abuse
Prevention. Washington, DC: U. S. Government Printing Office.
Linney, J. A.,
& Wandersman, A. (1991). Prevention Plus III: Assessing Alcohol and Other Drug
Prevention Programs at the School and Community Level: A Four-Step Guide to Useful
Program Assessment. Office for Substance Abuse Prevention. Washington, DC: U.
S. Government Printing Office.
Fetterman, D. M., Kaftarian, S.J., & Wandersman, A. (Editors) (1996). Empowerment Evaluation:
Knowledge and Tools for Self-Assessment and Accountability. Sage Publications:
Thousand Oaks, California.
On-line Journal Resources:
Evaluation and Program Planning
The National Clearinghouse for Alcohol and Drug Information (NCADI) has many documents on line at www.health.org
The Prevention Decision Support System has a measurement development module for the construction of measures
using standardized instruments, see www.preventiondss.org
The Centers for the Application of Prevention Technologies (CAPTs) have on-line program planning
and evaluation resources, see www.captus.org/
Code of Student Rights and Responsibilities: The code which specifies student
rights as well as conduct involving cheating, plagiarism, collusion, fraud, theft,
etc. can be found on the web in detail. (http://www.saff.utah.edu/CODE.HTM)
The University of Utah seeks to provide equal access to its programs, services and activities
for people with disabilities. In order to establish the existence of a disability
and/or request reasonable accommodation for classes, students should contact the
Center for Disabled Student Services (160 Olpin Union Building, 581-5020). If
arrangements are not necessary through the Center for Disabled Student Services,
but through the instructor alone, please bring concerns to the instructor so that
accommodations can be made.
August 21 Introduction to Program Evaluation, Class Objectives
(read Chapter 1)
What is Program Evaluation?
Trends Making Program Evaluations Necessary
Need for Program Evaluation and Evaluation Issues
Types of Roles of the Evaluator in the Evaluation (read Chapter 1)
Types of Evaluations
Purposes of the Evaluation
August 28 Planning the Evaluation: Resources and Barriers (read Chapter 2, Posavac and Carey)
Staffing the Evaluation
Overview of Evaluation Models
Steps to Planning
an Evaluation plus Literature Reviews,
Purpose of the Evaluation, Evaluation Questions,
Class Exercise: Evaluation Role Plays
Empowering Community Groups to Conduct their own Evaluations (read Linney, J.A., & Wandersman, A. (1996).
Empowering community groups with evaluation skills: The Prevention Plus III model,
and Dugan, M. (1996). Participatory and empowerment evaluation: Lessons learned
in training and technical assistance. Both in D. Fetterman, S. Kaftarian, & A. Wandersman
(Editors). Empowerment Evaluation: Knowledge and Tools for Self-assessment
and Accountability. Sage Publications: Thousand Oaks, California, Pp. 259-276.)
September 4 Goals and Objectives or Purpose of the Evaluation
Logic Models (the Total Evaluation Plan on one page!)
Ethics: Human Subjects,
Informed Consent, Conflict of Interest, Incentives, Coercion or Voluntary Participation,
etc. (Draft Guidelines in class manual, Chapter 5, Posavac and Carey)
September 11 The Measurement of Theory and the Theory of Measurement (Chapter 4)
Reliability and Validity (read chapters 4 and 5)
Selection of Measurements and Instruments (read Chapter 4 and Monograph by Dr.
Kumpfer on Measures, see pages 149 to 186 in the Instructor's Manual)
Needs Assessment, Social Indicators, Archival Existing Data
Etiological Theory Testing using Structural Equations Modeling (Chapter 6)
September 18 Process Evaluation: Program and Fiscal Auditing (read Chapter 7, Posavac and Carey)
Case Study in Empowerment Evaluation by State Auditor: Keller, J. (1996). Empowerment
evaluation and state government: Moving from resistance to adoption. In D. Fetterman,
S. Kaftarian, & Wandersman, A. (Editors). Empowerment Evaluation: Knowledge
and Tools for Self-assessment and Accountability, Sage Publications: Thousand
Oaks, California, pp. 79-99.
Monitoring and Data Collection Methods: (Chapter 7)
Program evaluation questions on implementation determine the type of data collected.
Types of Data and Informants
Management Information Systems
September 25 Evaluation Critique Presentations with 5-page Written Critiques and
Logic Model due; MIDTERM
October 2 NO CLASS --- FALL BREAK
October 9 Measures of Quality and Fidelity to Evidence-based Models (evaluator
site visits, fidelity checklists, codeable interviews with staff and other stakeholders,
focus groups with clients, client satisfaction measures, etc.)
Linking the Process Evaluation with the Outcome Measures
and Methods; Fieldwork and Observation (videotaping, coding observations);
Ethnographic Methods; Determination of Themes in Transcribed Data using software
analysis programs: NUD*IST, Ethnograph, etc. (read Chapter 12)
Cost: Cost-Benefit and Cost-Effectiveness (Chapter 11)
October 16 Outcome Evaluations--Research Designs: Non-experimental and Quasi-experimental;
(read Chapters 8 and 9)
Evaluation Questions Drive the Evaluation Design: What are Your Questions?
Threats to Internal and External Validity
Subject Sampling Methods
Designs (read Instructor's Manual): Random assignment, barriers and advantages
October 23 Quasi-experimental Designs: Overview and Link to Questions to Answer
Review of Experimental Designs
October 30 Cross-site, Multiple-site Health Evaluations: Unique Challenges
Read Yin, R.K. (1996). Empowerment evaluation at the federal and local level: Dealing
with quality (Pp. 188-207), and Stevenson, J.F., Mitchell, R., & Florin, P. (1996). Evaluation
and self-direction in community prevention coalitions (state-level cross-site
evaluation) (Pp. 208-233). Both in D. Fetterman, S. Kaftarian, & A. Wandersman
(Editors). Empowerment Evaluation: Knowledge and Tools for Self-assessment and
Accountability. Sage Publications: Thousand Oaks, California.
Cultural Issues in Program Evaluation: Resistance to Evaluation and Random Assignment, Staffing
with Culturally Competent Staff, Cultural Sensitivity of Measures and Evaluation
Methods (assigned reading)
Case Study in Community Partnerships: Grills, C.N., Bass, K., Brown, D., &
Akers, A. (1996). Empowerment evaluation: Building upon a tradition of activism
in the African American community, and Fawcett, S., et al. (1996). Empowering community
health initiatives through evaluation (case study with the Jicarilla Apache Indian
tribe). Both in D. Fetterman, S. Kaftarian, & A. Wandersman (Editors). Empowerment
Evaluation: Knowledge and Tools for Self-assessment and Accountability. Sage Publications:
Thousand Oaks, California, Pp. 123-140 & Pp. 161-187.
November 6 Basic Descriptive Statistics
Data Analysis (software and practice session)
November 13 Data Interpretation
Writing the Evaluation Report: Sections of the Report, How to get your report read, Reporting to the Agency,
Publishing your results
November 20 Evaluation Plan and Design due Nov. 20th. Bring three
copies, one for the instructor and two for reviewers.
Using the Evaluation Report: Changing the program to increase quality; working
hypotheses for future research or program evaluations; the ongoing evaluation cycle
December 4 Evaluation Plan Presentations (Two Evaluation Design Critiques due,
using the Plan Quality Index). Butterfoss, F., et al. (1996). The Plan Quality
Index. In D. Fetterman, S. Kaftarian, & A. Wandersman (Editors). Empowerment
Evaluation: Knowledge and Tools for Self-assessment and Accountability. Sage Publications:
Thousand Oaks, California. Pp. 304-331.
December 11: FINAL EXAM - 4:30 pm
1. Team Critique of Existing Agency Evaluation Including a Logic Model.
Each team will
be required to submit by September 25th a 5-page single spaced critique of their
team agency's existing evaluation plan or evaluation efforts (unwritten). The
critique should include:
1. Agency Name and Program to be Evaluated
2. Purpose or Impetus for the Evaluation
3. Logic Model for Program
4. Goals and Objectives of the Program
5. Summary of the Program Activities
6. Current Evaluation Activities or Plan
7. Strengths of the Existing Evaluation Plan
8. Weaknesses of the Existing Evaluation Plan
9. Suggestions for Improvement
10. Practical Barriers to Improving the Evaluation
The data used to
develop this critique should include interviews with the Program Director, Agency
Director (if different), and possibly the Management Information Specialist and
other program staff. It would also help to observe some program activities in person
to improve your understanding of the program. Get a copy of the existing
program evaluation or any evaluation instruments if they exist. Not all team members
will need to be at the interviews. The team can split up the duties. The team
will rate each other on their contributions to the project and submit summary
results with the critique.
2. Presentation and Handout on an Evaluation Topic
Over the semester,
graduate students will select one of the topics listed and prepare a presentation
on the topic. In addition, they will hand out in class a two page overview of
their presentation with bibliography of the most useful references and web sites
on the topic. The instructor has books and files on these topics and will meet
with all students at least one week before they present on the topic to assure
that they are covering the appropriate material in the course presentations.
3. Evaluation Plan: due Nov. 20th
By the end of the
class, the graduate students will submit a copy of their proposed evaluation plan
for their selected agency. At least five single spaced pages should be submitted
for the evaluation plan following the outline provided in the course syllabus.
Three copies should be submitted--one for the instructor and two for each reviewer.
4. Evaluation Plan Reviews: due Dec. 4th
Each student will
be given two other students' evaluation plans to review and critique. The students
will submit their final evaluation plan critiques on Dec. 11th.
5. Midterm (Sept. 25th) and Final Test (Dec. 11th): These are two-page multiple-choice
and True/False tests covering the textbook and Instructor's Manual (lectures
on the textbook), plus a one-page written analysis of possible issues with an
evaluation plan presented to the students at the test.
5% Logic Model - due September 25th
10% Team Critique of Evaluation and Presentation - due September 25th
15% Classroom Presentation and Overview Handout - schedule with instructor
15% Midterm Test - Due Sept. 25th
20% Evaluation Plan and Design (minimum 5 pages, single-spaced, 3 copies) -
due Nov. 20th
20% Final Test - Dec. 11, 4:30pm
10% Two Evaluation Design Critiques of Existing Evaluation Plans - due Dec. 4
The students will
be graded on a modified curve with most students receiving "A" and "B"
grades depending on the effort and performance of the class in general.