Public Program Evaluation
Director and Senior Scientist,
Center for Health Policy and Program Evaluation
University of Wisconsin
Students with disabilities foreseeing the need for accommodation should see
me as soon as possible.
Description of design and methodological issues in the evaluation of public programs.
Compares the conceptual, statistical and ethical issues of experimental, quasi-experimental
and non-experimental designs. Definitions of outcomes, sample size issues, statistical
biases in measuring causal effects of programs, and the reliability of findings
are emphasized using case studies selected from current public programs.
Required Texts (Available at University Bookstore):
Michael Quinn Patton (1997) Utilization-Focused Evaluation (3rd Edition). Thousand
Oaks, CA: Sage Publications.
Peter H. Rossi and Howard E. Freeman (1993) Evaluation: A Systematic Approach (5th Edition).
Newbury Park: Sage Publications.
Lawrence B. Mohr (1995) Impact Analysis for Program Evaluation (2nd Edition). Thousand
Oaks, CA: Sage Publications.
A packet of readings is in preparation; you will be notified when it is available.
I will also provide handouts and other supplemental material on an occasional basis.
I will also provide a bibliography of additional readings which may be of interest
for further in-depth study, reference, and assistance in pursuing your own substantive
interests.
Students will also be expected to identify relevant readings in their own substantive
field of interest which reflect application of concepts discussed in the class.
One assignment will be a paper based on at least three individually identified/selected
articles.
The course is not intended as a statistics course nor as a prescriptive methods
course. Rather, it is a survey of approaches to program evaluation and evaluation
research. The emphasis will be on conducting useful program evaluations in public
and non-profit agencies and programs. The course will expose students to the full
range of options in the professional field of program evaluation, with particular
attention to tradeoffs required based on political, methodological and resource
constraints. I will attempt to mold course content to likely career interests/plans
of the class, when possible. Students should understand that each topic covered
probably has an entire course devoted to it somewhere in the University--this
is in many ways a survey course.
I plan to combine lectures, discussion, readings and practical experience in real-world
evaluation project planning and execution. I will make extensive use of case studies,
often from my own work, from guest lecturers or published articles to illustrate
points and to initiate critical discussion. I recognize the value of “active learning,”
in which course material is translated to real situations via discussion and
application of concepts and principles. I plan to build on the background
knowledge and experiences of the students, as well as those of the instructor, to enrich
the course. I will encourage and expect students to expand/supplement/illustrate
points and issues made in class from their own substantive areas of interest.
I also will use principles of formative feedback to modify/improve the course
as we progress.
My overall goal
for the course is to prepare students to facilitate and/or use evaluations in
typical public program situations and settings, including developing skills to:
1. appropriately work with intended users of evaluative information in planning an evaluation,
2. assess the contextual constraints, political circumstances, and resource limitations
relevant to a proposed evaluation,
3. select strategies that are appropriate and useful to generate evaluative information
for the primary intended users,
4. prepare for the numerous potential pitfalls in implementing evaluation studies,
5. understand the strengths and limitations of various evaluation approaches
and designs, and to assess the quality and usefulness of proposed or completed
evaluations, given the intended use.
1. Class attendance
(especially important since we only meet once per week) and participation in discussions.
Note that quantity of participation is not as important as quality of participation.
2. Students will be expected to be familiar with required readings on the day assigned and
be prepared to participate in discussion based on the readings. The reading load
may be adjusted based on monthly checkpoints.
3. Each student
will be assigned to lead the discussion in class at least once, based on his or
her work on one of the individual written assignments.
4. Written assignments:
There will be three written graded assignments: (a) description and assessment
of an alternative approach to evaluation (subset of Patton’s Menu 8.1),
(b) a summary and critique of key evaluation findings and approaches in your own
substantive area of topical interest, based on at least three articles (including
meta-analysis and synthesis), and (c) a final paper/exam summarizing your position
on the various controversies, options and approaches to evaluation we have discussed
in class. Separate guidelines for each of these assignments will be provided.
These will be due as specified below or as assigned to correspond to discussion
topics you are leading. These assignments should be written individually.
5. Group Evaluation
Projects: I have made arrangements with several government and non-profit organizations
with interest in evaluation for small group projects. My emphasis with these programs
has been on providing a meaningful experience in either (a) developing an evaluation
plan for a specific program within the organization, or (b) assisting in a substantively
meaningful way in an evaluation which is already underway. A separate list will
be distributed at the second or third class. Students will be asked to provide
me with their first three choices and I will make assignments from there.
NOTE: If you have
an interest in or compelling reasons to work on a project of your own choosing,
I am open to adding projects to the list. Please see or call me before the end
of the week (i.e., by Sept. 5). My preference is that these be group projects,
not individual work.
There will be
two reports to the class from these projects. The first will be a background description
of the program/policy you are working on and the critical issues at hand, highlighting
dilemmas you may be facing. The second will be a professional presentation of
the final product, which you will also be required to make for the sponsor/staff
of the program. An executive summary must be prepared and distributed to members
of the class and to the sponsoring agency at least one week before the scheduled
presentation. A final detailed paper (including--as relevant--brief literature
review, background on the program theory/logic model, the intended uses of the
evaluation, proposed design/approach, recommended constructs/measures, data sources,
anticipated analytic model, and a ball-park budget and time line) will be submitted
to the instructor and to the sponsoring agency by the end of the semester (due Dec. 16).
Grading:
Class Participation: 10%
Discussion Lead: 5%
Alternative approach to evaluation: 15%
Review of articles in your area: 15%
Final summary paper: 20%
Group Project: 35%
Dates: (Specific dates to lead discussions/present will also be arranged)
Written Assignment 1: Sept. 30
Written Assignment 2: Nov. 4
Final Paper: Dec. 16
Final Group Project Paper: Dec. 16
Schedule of Topics and Reading Assignments
for Class Lectures and Discussion
September 2: Course Overview and Introduction
September 9: Emergence of Evaluation as a Professional Discipline and Basic Issues in the
Patton, Chapters 1-3
Rossi and Freeman, Chapters 1 and 10
American Evaluation Association (1995). “Guiding principles for evaluators.”
New Directions for Program Evaluation No. 66 (Summer): 19-26.
September 16: Program Diagnostics, Program Theory and the Evaluation Planning Process
Patton, Chapters 6, 7, 10
Rossi and Freeman, Chapter 2
Wholey, Joseph S. (1987), “Evaluability assessment: Developing program theory.”
New Directions for Program Evaluation No. 33 (Spring): 77-92.
Chen, Huey-tsyh (1990), “Issues in constructing program theory.” New
Directions for Program Evaluation No.47(Fall):7-18.
September 23: Multiple and Alternative Models of Program Evaluation
Patton, Chapters 11 and 12
Fishman, Daniel B. (1995) “Postmodernism comes to program evaluation II:
A review of Denzin and Lincoln’s Handbook of Qualitative Research.”
Evaluation and Program Planning 18(3): 301-310.
September 30: Multiple Models, Continued
Patton, Chapters 4 and 5, 8
Rossi and Freeman, Chapter 3
October 7: Implementation/Process Evaluation
Patton, Chapter 9
Rossi and Freeman, Chapter 4
McGraw, Sarah, et al. (1996) “Using process data to explain outcomes.”
Evaluation Review 20 (3): 291-312.
October 14: Implementation/Process Evaluation--Qualitative Data
Oliker, Stacey J.
(1995). “The proximate contexts of workfare and work: A framework for studying
poor women’s economic choices.” Sociological Quarterly 36(2):251-272.
Greene, Jennifer C. (1994) “Qualitative program evaluation: Practice and promise.”
Pp. 530-544 in Denzin, N.K. and Y.S. Lincoln (editors), Handbook of Qualitative
Research. Thousand Oaks, CA: Sage.
Miller, W.L. and Crabtree, B.F. (1994). “Qualitative analysis: How to begin
making sense.” Family Practice Research Journal 14(3):289-297.
Yin, Robert K. (1994) “Evaluation: A singular craft” New Directions
for Program Evaluation No. 61 (Spring):71-84.
Reichardt, Charles S. and Rallis, Sharon F. (1994). “Qualitative and quantitative
inquiries are not incompatible: A call for a new partnership.”New Directions
for Program Evaluation No. 61 (Spring):85-91. (Also pp. 5-11 of this edition)
October 21: Evaluating Coalition and Community Building Programs
Room, Robin (1989) “Community action and alcohol problems: The demonstration project
as an unstable mixture.” Pp. 1-25 in N. Giesbrecht et al. (editors), Research,
Action and the Community: Experiences in the Prevention of Alcohol and Other Drug
Problems. OSAP Prevention Monograph #4. Washington, DC: US Government Printing Office.
Gruenewald, Paul J. (1997) “Analysis approaches to community evaluation.”
Evaluation Review 21(2): 209-230.
October 28: Outcome/Impact Evaluation--Overview
Rossi and Freeman,
Mohr, Chapters 1,2, and 4
November 4: Randomized Trials
Rossi and Freeman, Chapter 6
Conrad, K. and Conrad, M. (1994). “Reassessing validity threats in experiments:
Focus on construct validity”. New Directions for Program Evaluation 63 (fall):
Lam, J. A. et al. (1994). “I prayed real hard, so I know I’ll get
in: Living with randomization.” New Directions for Program Evaluation 63
(Plus rejoinder pp. 67-71.)
Lecture: Michael Wiseman on Project New Hope
Assignment 2 due.
November 11: Multi-level Hierarchical Analysis in Randomized Experiments with Cluster Assignment
Murray, David M. and Peter J. Hannan (1990). “Planning for appropriate analysis
in school-based drug-use prevention studies.” Journal of Consulting and
Clinical Psychology 58 (4): 458-468.
Simpson, Judy M., N. Klar, A. Donner (1995). “Accounting for cluster randomization:
A review of primary prevention trials, 1990 through 1993.” American Journal
of Public Health 85 (10): 1378-1383.
Brown, Roger et al. (To be assigned)
Brown, Research Design and Statistics Unit, School of Nursing.
November 18: Quasi-experimental Designs and Analysis
Rossi and Freeman, Chapters 7 and 8
Mohr, Chapters 7 and 9
John Witte’s study of School Choice (available on internet) and critique
(also on internet)
Foster, E. M. (1995). “Why teens do not benefit from work experience programs:
Evidence from brother comparisons.” Journal of Policy Analysis and Management
14 (3): 393-414.
November 25: Time Series Approaches
Mohr, Chapter 9
Schwartz, S. and P. Zorn (1988) “A critique of quasi-experimentation and
controls for measuring program effects: Application to urban growth control.”
Journal of Policy Analysis and Management 7(3): 491-505.
Kessler, David A. and Sheila Duncan (1996). “The impact of community policing
in four Houston neighborhoods.” Evaluation Review 20 (6): 627-669.
December 2: Outcome Monitoring
Mohr, Chapter 10
Rost, Kathryn M. et al. (1996) “Does this treatment work? Validation of
an outcomes module for alcohol dependence.” Medical Care 34(4):283-294.
Guest lecture on Health Care Outcomes Monitoring (Nancy Dunham, Wisconsin
Network for Health Policy Research)
December 9: Improving Design, Implementation and Reporting of Evaluation Studies
Patton, Chapters 13 and 14.
Dennis, Michael L. and Boruch, Robert F. (1994). “Improving the quality
of randomized field trials: Tricks of the trade.” New Directions for Program
Evaluation No. 63 (fall): 87-101.
Dennis, Michael L. (1994). “Ethical and practical randomized field experiments.”
Pp. 155-197 in Wholey et al. (Editors), Handbook of Practical Program Evaluation.
San Francisco: Jossey Bass.
Torres, R.A., H. Preskill and M. Piontek (1997). “Communicating and reporting:
Practices and concerns of internal and external evaluators.” Evaluation Practice.
December 16: NO CLASS
NOTE: If necessary to complete all group reports, we may meet on the scheduled
finals timeslot for final presentations.
Final Written Report on Group Project Due
Final Written Assignment #3 Due
Planning and Conducting an Evaluation
Adapted from Patton’s
1st edition of Utilization Focused Evaluation (1978), Moberg’s Evaluation
of Prevention Programs: A Basic Guide for Practitioners (1985), and Judith Garrard’s
10 Step Model (University of Minnesota, School of Public Health, 1986).
1. Identify and
organize relevant intended users, decision makers, and other stakeholders.
2. Identify and
refine the relevant evaluation questions--What is the purpose of the evaluation?
3. Specify program
theory/logic model; goals, objectives, and/or evaluative criteria.
4. Select appropriate
evaluation approach(es) given the evaluation questions/purpose.
Cost-Effectiveness or Cost-Benefit
5. Develop methodological
details and an evaluation plan.
Design (experimental, quasi-experimental, non-experimental, descriptive)
Subjects/Target population, protection of subjects
Instrumentation, Data Sources and Data Collection
6. Assess Administrative feasibility--Budget, time-lines, personnel, program disruption,
likelihood of compliance? Tradeoffs?
7. Pilot test the
evaluation plan, modify, implement and monitor.
8. Summarize, analyze
and interpret the data (What statistical or qualitative approach?)
9. Report and use the findings.