Implementing the Evaluation Plan and Analysis: Who, What, When, and How

Dorene F. Balmer, PhD, is Associate Professor of Pediatrics, Perelman School of Medicine, University of Pennsylvania; Jennifer A. Rama, MD, MEd, is Associate Professor of Pediatrics, Baylor College of Medicine; and Deborah Simpson, PhD, is Director of Education–Academic Affairs, Advocate Aurora Health, Adjunct Clinical Professor of Family Medicine, University of Wisconsin School of Medicine and Public Health and Medical College of Wisconsin, and Deputy Editor, Journal of Graduate Medical Education.

Corresponding author: Dorene F. Balmer, PhD, Children's Hospital of Philadelphia, balmerd@email.chop.edu


The Challenge

Your program has planned a telehealth-focused evaluation in response to faculty and fellow concerns about the impact of telehealth on trainee competence. You have obtained stakeholder perspectives and identified the evidence you will need to determine the educational value of telehealth. But education budgets have been frozen, faculty time is consumed by clinical responsibilities, you are juggling recruitment and orientation, and the evaluation data you do have doesn't specifically address telehealth. With these formidable barriers to program evaluation, how will you pull off an evaluation that will inform decisions and guide actions?

What Is Known

Often, we think of program evaluation as reviewing end-of-rotation ratings or compiling the annual program evaluation. Launching a new program evaluation is more complex. There are many moving parts and different players, because program evaluation unfolds in real life, not in a controlled research environment. Implementing program evaluation entails collecting evidence that answers the evaluation question and provides a means for understanding the answer in context.1 Much like quality improvement, implementing program evaluation means both moving forward by collecting evidence and stepping back to make sure the evidence collected to date is on target to inform decisions and guide actions. Given the complexity of program evaluation, it is important to have a shared model of how you will implement the evaluation, outlining the when, who, what, and how (see the Figure). If you plan to share your work as generalizable knowledge (versus internal improvement), consider reviewing your institutional review board's criteria for review.

Figure

Example of Program Evaluation Tasks, Timeline, and Accountability

Abbreviations: PD, program director; APD, associate program director; IRB, institutional review board.

Rip Out Action Items

Implementing program evaluation is complex; lay out a plan for who will do what, when, and how.

Analyze data with an eye toward your evaluation question and how you will use your findings to inform decisions and guide action.

How You Can Start TODAY

Start with a deadline and work backward (when): When do findings need to be shared to inform the decision(s)? How much time will you need to review existing data sets and collect and then analyze new data? Can you embed evaluation team meetings in existing meetings?
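If it helps to make the backward planning concrete, here is a minimal sketch in Python, assuming hypothetical task names, durations, and a placeholder deadline; tasks are listed from last (sharing findings) to first, so each start date is found by working back from the deadline.

```python
from datetime import date, timedelta

# Hypothetical tasks, listed from last to first, with estimated durations in weeks;
# substitute your own program's tasks and time estimates.
tasks = [
    ("Share findings with stakeholders", 1),
    ("Analyze new and existing data", 3),
    ("Collect new data (eg, survey, focus group)", 4),
    ("Pilot and revise data collection tools", 2),
    ("Review existing data sets", 2),
]

deadline = date(2026, 6, 1)  # placeholder: when findings must inform the decision

# Work backward: the final task ends at the deadline, and each prior task
# ends where the next one begins.
end = deadline
for name, weeks in tasks:
    start = end - timedelta(weeks=weeks)
    print(f"{start} to {end}: {name}")
    end = start
```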

Form a small, agile, action-oriented evaluation team (who): Who is available to help (including but not limited to members of your standing program evaluation committee)? Who has access to the data you need? Who has the skills needed for specific tasks, such as tool development, data collection, and analysis?

Identify specific evaluation tasks (what): Do comments need to be analyzed? Do surveys need to be piloted? Are respondent reminders needed? Do statistics need to be run? Do you need midpoint progress reports?

Launch and monitor (how): Once you have clarified the when, who, and what, be thoughtful about how you advertise the launch and how you monitor incoming data. Revisit evaluation standards to guide the evaluation process. For example, will the data be useful for decision making (utility standard)? Is the data on target so that your decisions are grounded in credible evidence (accuracy standard)?2
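As one way to monitor incoming data against the utility and accuracy standards, the sketch below (Python, with a hypothetical roster and response log) tallies survey response rates by respondent group so the team can see where reminders are needed:

```python
from collections import Counter

# Hypothetical numbers invited and responses logged so far.
invited = {"fellow": 12, "faculty": 20}
responses = ["fellow", "faculty", "faculty", "fellow", "faculty"]

received = Counter(responses)
for group, n_invited in invited.items():
    n_received = received.get(group, 0)
    rate = n_received / n_invited
    flag = "  <- send reminder" if rate < 0.6 else ""
    print(f"{group}: {n_received}/{n_invited} responses ({rate:.0%}){flag}")
```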

What You Can Do LONG TERM

Anticipatory planning: Prepare to analyze the data within the evaluation team or consult local experts. Do you need different software? Do you need someone to help with statistics or qualitative analysis of comments?
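If the team decides it can handle simple statistics on its own, here is a minimal sketch (Python standard library only, with hypothetical 1-5 competence ratings) of the kind of descriptive summary that may be all a decision requires; anything more complex is a cue to consult a local expert.

```python
import statistics

# Hypothetical end-of-rotation ratings of trainee competence (1-5 scale),
# split by encounter type.
ratings = {
    "in_person": [4, 5, 4, 3, 5, 4],
    "telehealth": [3, 4, 3, 4, 2, 4],
}

for encounter, values in ratings.items():
    print(f"{encounter}: n={len(values)}, "
          f"median={statistics.median(values)}, "
          f"mean={statistics.mean(values):.2f}")
```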

Mine existing data: Keep your attention focused on your evaluation question; be careful not to overstep the evidence collected for the purpose of program evaluation.3
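One way this focus can play out in practice: a short Python pass over comments you already collect (the file and column names here are hypothetical) that flags only those mentioning telehealth, so the analysis stays anchored to the evaluation question rather than to everything the data set could answer.

```python
import csv

# Hypothetical export of existing end-of-rotation comments, one row per comment.
KEYWORDS = ("telehealth", "telemedicine", "video visit", "virtual visit")

with open("rotation_comments.csv", newline="") as f:
    relevant = [
        row for row in csv.DictReader(f)
        if any(k in row["comment"].lower() for k in KEYWORDS)
    ]

print(f"{len(relevant)} existing comments mention telehealth")
```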

Discuss findings: Build in time with your team to ask questions about the findings; consider how your own values, beliefs, and experiences might influence your interpretation of findings. If there are unanticipated findings, use your team discussion to determine next steps.

Scholarly activity: If you have done systematic evaluation with a scholarly approach, you may want to share the findings with others internal and external to the organization. Ask your institutional review board if your evaluation constitutes human subject research and whether a form of review is needed for external dissemination of your work.

Resources

1. Preskill H, Torres RT. Building capacity for organizational learning through evaluative inquiry. Evaluation. 1999;5(1):42–60. doi:10.1177/135638909900500104

2. Balmer DF, Riddle JM, Simpson D. Program evaluation: getting started and standards. J Grad Med Educ. 2020;12(3):345–346. doi:10.4300/JGME-D-20-00265.1

3. Balmer DF, Rama JA, Martimianakis M, Stenfors-Hayes T. Using data from program evaluations for qualitative research. J Grad Med Educ. 2016;8(5):773–774. doi:10.4300/JGME-D-16-00540.1
