It’s Time to Make More Room for Program Evaluation in the Education Doctorate Program

DOI:

https://doi.org/10.5195/ie.2023.335

Keywords:

education doctorate, program evaluation, improvement science, Carnegie Project on the Education Doctorate (CPED)

Abstract

This essay highlights the value of an applied methodology course in program evaluation within the education doctorate (EdD) program by exploring several benefits it offers for enhancing a doctoral student's ability to solve complex problems of practice. Observations and recommendations are drawn from designing and teaching a program evaluation course to two cohorts of EdD students. Improvement science is referenced throughout to highlight how the two approaches may complement each other, not to place a higher value on one over the other. The discussion covers where program evaluation and improvement science appear to overlap, along with a brief overview of their major differences. The author maintains that both program evaluation and improvement science offer tremendous capacity to provide the ideas, tools, and approaches that prepare students to be the change agents they aspire to be in their present and future roles as scholarly practitioners.

Author Biography

Jessica A. Marotta, Marymount University

School of Education
Marymount University

References

Buss, R. R., & Avery, A. (2017). Research becomes you: Cultivating EdD students’ identities as educational leaders and researchers and a “learning by doing” meta-study. Journal of Research on Leadership Education, 12(3), 273–301.

Bryk, A., Gomez, L., Grunow, A., & LeMahieu, P. (2015). Learning to improve: How America’s schools can get better at getting better. Harvard Education Press.

Christie, C. A., Inkelas, M., & Lemire, S. (2017a). Editors’ notes. New Directions for Evaluation, 153, 7–10.

Christie, C. A., Inkelas, M., & Lemire, S. (2017b). Understanding the similarities and distinctions between improvement science and evaluation. New Directions for Evaluation, 153, 11–22.

Cochran-Smith, M., & Lytle, S. L. (2009). Inquiry as stance: Practitioner research for the next generation. Teachers College Press.

Firestone, W. A., & Leland, A. S. (2021). Strategies for promoting evidence use through the education doctorate. Impacting Education, 6(4), 8–15.

Firestone, W. A., Perry, J. A., Leland, A. S., & McKeon, R. T. (2021). Teaching research and data use in the education doctorate. Journal of Research on Leadership Education, 16(1), 81–102.

Giancola, S. P. (2021). Program evaluation: Embedding evaluation into program design and development (1st ed.). SAGE Publications.

Hall, G. E., & Hord, S. M. (1987). Change in schools: Facilitating the process. State University of New York Press.

Henry, G. T., & Mark, M. M. (2003). Beyond use: Understanding evaluation’s influence on attitudes and actions. American Journal of Evaluation, 24(3), 293–314.

Herr, K., & Anderson, G. L. (2005). The action research dissertation: A guide for students and faculty. SAGE Publications.

Hinnant-Crawford, B. N. (2020). Improvement science in education: A primer. Myers Education Press.

Langley, G., Moen, R., Nolan, T., Norman, C., & Provost, L. (2009). The improvement guide: A practical approach to enhancing organizational performance (2nd ed.). Jossey-Bass.

Newcomer, K. E., Hatry, H. P., & Wholey, J. S. (2015). Handbook of practical program evaluation. Jossey-Bass.

Patton, M. Q. (1997). Utilization-focused evaluation (3rd ed.). SAGE Publications.

Pawson, R. (2006). Evidence-based policy: A realist perspective. SAGE Publications.

Pawson, R., & Manzano-Santaella, A. (2012). A realist diagnostic workshop. Evaluation, 18(2), 176–191.

Pawson, R., & Tilley, N. (1997). Realistic evaluation. SAGE Publications.

Pawson, R., & Tilley, N. (2001). Realistic evaluation bloodlines. American Journal of Evaluation, 22(3), 317–324.

Peurach, D. J., Russell, J. L., Cohen-Vogel, L., & Penuel, W. R. (2022). The foundational handbook on improvement research in education. Rowman & Littlefield.

Piantanida, M., McMahon, P. L., & Llewellyn, M. (2019). On being a scholar-practitioner: Practical wisdom in action. Wisdom of Practice Series, Learning Moments Press.

Preskill, H., & Torres, R. T. (2000). The learning dimension of evaluation use. New Directions for Evaluation, 88, 25–37.

Weiss, C. H. (1988). Evaluation for decisions: Is anybody there? Does anybody care? Evaluation Practice, 9(1), 5–19.

Westhorp, G. (2014). Realist impact evaluation: An introduction. Methods Lab. Overseas Development Institute.

Zambo, D., Buss, R. R., & Zambo, R. (2015). Uncovering the identities of students and graduates in a CPED-influenced EdD program. Studies in Higher Education, 40(2), 233–252.

Published

2023-09-05

How to Cite

Marotta, J. A. (2023). It’s Time to Make More Room for Program Evaluation in the Education Doctorate Program. Impacting Education: Journal on Transforming Professional Practice, 8(4), 50–56. https://doi.org/10.5195/ie.2023.335

Section

Themed-Reimagining Research Methods Coursework for the Preparation of Scholar-Practitioners