Planned Versus Actual: Lessons Learned in Assessing Fidelity to a Program Model

This blog post was written by Heather Lewis-Charp and Antoinnae Comeaux of Social Policy Research Associates.


As education and workforce development programs become more accountability-focused, their funders and stakeholders increasingly want evaluations that rigorously assess program outcomes across different locations and different types of organizations.

Given how diverse implementation may be across sites, how can evaluators measure this variation and understand how it impacts outcomes?

One option is to assess program fidelity, or the degree to which a program is delivered as intended. Assessing program fidelity can help evaluators understand and make meaning of differential program outcomes across sites. Fidelity tools can also be helpful to funders—in determining whether grantees are doing what they proposed—and to grantees who are looking to strengthen their programs.

Implementing a fidelity assessment can pose a number of challenges, but it can also provide evaluators, funders, and grantees with valuable information.

Challenges that evaluators face in implementing fidelity assessments:

  • Bringing subjects on board. It’s easy for program staff to think of fidelity assessment as a kind of audit, in contrast to the traditional, learning-oriented goals of other evaluations. Evaluators must be able to communicate the big ideas behind the fidelity assessment upfront, so that program staff and leadership will be willing to share information honestly and openly.
  • Representing the program model. A clearly defined program model is necessary for assessing program fidelity—and it must be applicable to all program sites.
    • In cases where program designers have already developed clear guidelines for the program model, developing a fidelity tool is much easier. It can be as simple as following a logic model, or understanding that there are clearly defined processes or implementation strategies for things like recruiting participants, providing case management, and using technology-enabled learning in coursework.
    • When the evaluator is in the position of defining the essential elements of the program model, developing a fidelity tool is more complicated. In these cases, the tool can be informed by rigorous peer-reviewed research on what constitutes effective practice, as well as by what various stakeholders understand to be key program components.
  • Developing the tools. Tools must be narrow and focused enough to define key components of fidelity in a somewhat close-ended way. At the same time, the tools must be broad and flexible enough to capture different types of implementation.
  • Working with multi-evaluator teams. Fidelity assessments often require coordinating the efforts of multiple site visitors. It becomes crucial, therefore, to ensure that fidelity tools are used consistently across sites and throughout the evaluation period. This can be an intensive endeavor, but it can be achieved by regularly training evaluation teams on the use of fidelity tools. We have found that interrater reliability is enhanced by strategies such as (1) regular question-and-answer sessions, (2) annotated guides clarifying how each dimension should be defined and rated, and (3) detailed quality assurance reviews.
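One common way to check interrater reliability of the kind described above is Cohen's kappa, which measures agreement between two raters after correcting for chance. The sketch below is illustrative only: the ratings are invented, and the 1–3 fidelity scale is an assumption, not a feature of any particular evaluation.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed agreement: share of items where the raters gave the same score.
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement: chance overlap implied by each rater's marginal rates.
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(
        (counts_a[c] / n) * (counts_b[c] / n)
        for c in counts_a.keys() | counts_b.keys()
    )
    return (observed - expected) / (1 - expected)

# Hypothetical data: two site visitors score the same ten sites
# on a 1-3 fidelity scale for one program dimension.
rater_a = [3, 2, 3, 1, 2, 3, 2, 1, 3, 2]
rater_b = [3, 2, 3, 2, 2, 3, 1, 1, 3, 2]
print(round(cohens_kappa(rater_a, rater_b), 2))  # 0.69
```

A kappa well below 1.0 after a round of ratings is a useful signal that the annotated guides or training sessions need another pass for that dimension.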

Why is fidelity assessment worth it?

  • A well-designed fidelity tool yields a quantitative measure of how faithfully a model has been implemented. Evaluators can subsequently use that measure to analyze other quantitative and qualitative data. For instance, evaluators can explore whether participant survey results or test outcomes differ for programs with higher fidelity. In other words, do program sites that stick to the model have better outcomes than sites that deviate from it? Results from a fidelity assessment are a great jumping-off point for analysis and for generating theories that can be tested as an evaluation period comes to a close.
  • Fidelity assessment tools create another opportunity to understand program implementation at any given site. If conducted at distinct phases of program implementation (e.g., beginning, middle, and end), fidelity assessment can support a program’s ongoing planning efforts. For instance, if at an early assessment a site ranks relatively low in fidelity for one aspect of a program model, it can use that result to reprioritize program efforts for the near future.
  • Fidelity assessments are an additional source of useful qualitative data. The data they generate can explain, for instance, why program directors have modified implementation, or which contextual factors have influenced one aspect of a site’s fidelity rating. Depending on how fidelity tools are constructed, they can also yield useful information on how key program dimensions have been implemented. A fidelity assessment does not have to entail bar charts showing who is and who isn’t following a component of a program model—rather, it can provide one way to explore and understand what a program actually looks like across different sites, and to generate ideas for further analysis.
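As a minimal sketch of the quantitative analysis described above, one starting point is simply correlating sites' overall fidelity scores with a participant outcome measure. Everything here is invented for illustration: the eight sites, the 0-100 fidelity scores, and the outcome rates are hypothetical, and a real analysis would need far more sites and appropriate statistical care.

```python
def pearson_r(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Covariance term and the two standard-deviation terms.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sx = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sy = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: overall fidelity score (0-100) and a participant
# outcome rate (e.g., credential attainment) for eight program sites.
fidelity = [55, 62, 70, 74, 81, 85, 90, 95]
outcomes = [0.41, 0.48, 0.44, 0.55, 0.58, 0.52, 0.66, 0.70]

r = pearson_r(fidelity, outcomes)
print(round(r, 2))  # a strongly positive r would suggest that sites
                    # closer to the model also had better outcomes
```

In practice this correlation is only a jumping-off point: evaluators would pair it with the qualitative fidelity data to explore why particular high- or low-fidelity sites look the way they do.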