Conducting a Frame Analysis: Using Twitter to track narrative change for policy advocacy efforts

Presenters: Heather Lewis-Charp and Laura Pryor

Nov 13, 2019 (05:45 PM – 06:30 PM), CC M100 J

Evaluations of advocacy efforts often attempt to track changes in the narrative around policy issues. Capturing narrative change can be challenging, but the data have the potential to produce powerful findings for stakeholders. The 2019 conference theme challenges evaluators to adapt our methods to catalyze important conversations; conducting a narrative change or “frame analysis” is an innovative approach to capturing subtle shifts in how social issues are discussed. This demonstration will show how to mine Twitter data to document the predominant narrative around key social issues. Using the example of a health advocacy initiative, the presenters will walk the audience through designing a frame analysis to capture narrative change and will go step by step through the process of extracting and analyzing Twitter data using R. The audience will gain concrete tools for working with Twitter data and will better understand how to draw on social media data to evaluate advocacy initiatives.
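Once tweets have been extracted, the core of a frame analysis is tagging each tweet with the frame(s) it invokes and tallying how often each frame appears. A minimal sketch of that coding step, written in Python rather than the presenters' R, with entirely hypothetical frame lexicons and sample tweets, might look like:

```python
from collections import Counter

# Hypothetical frame lexicons: in a real study these keyword sets would be
# developed from the literature and refined against a hand-coded sample.
FRAMES = {
    "access": {"coverage", "uninsured", "afford", "access"},
    "personal_responsibility": {"choice", "lifestyle", "responsibility"},
    "equity": {"disparity", "equity", "underserved"},
}

def classify_frames(tweet):
    """Return the set of frame labels whose keywords appear in the tweet."""
    words = set(tweet.lower().replace(",", " ").replace(".", " ").split())
    return {frame for frame, keywords in FRAMES.items() if words & keywords}

def frame_counts(tweets):
    """Tally how many tweets invoke each frame (a tweet may invoke several)."""
    counts = Counter()
    for tweet in tweets:
        counts.update(classify_frames(tweet))
    return counts

# Hypothetical sample tweets standing in for an extracted Twitter corpus.
sample = [
    "Too many families cannot afford coverage.",
    "Health is a personal choice and a lifestyle issue.",
    "We must close the disparity facing underserved communities.",
]
print(frame_counts(sample))
```

In practice the frame shares would be computed per week or month to reveal narrative shifts over the advocacy period; the session itself demonstrates the equivalent extraction and analysis workflow in R.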

Strategic Learning: A Funder-Evaluator Panel on Responsive Approaches to Evaluating Complex Initiatives

Presenters: Sengsouvanh (Sukey) Leshnick, Daniela Berman, and Jackie Allen (Bush Foundation Education Portfolio Director)

Nov 15, 2019 (03:30 PM – 04:15 PM), CC 101 G

This panel will discuss strategic learning, an approach to evaluation that works well with complicated and complex initiatives that evolve over time and have multiple paths to achieving outcomes. Such initiatives present unique challenges to conventional program evaluation and require new approaches to ensure that evaluation remains relevant and useful. The Bush Foundation has been engaged in a strategic learning partnership with Social Policy Research Associates (SPR) around its individualized learning strategy, using evaluation for real-time learning and adapting strategies based on lessons learned and changing circumstances. Strategic learning means making evaluation part of the strategy or initiative itself, embedding it so that it can influence decision making. This panel will describe the concept and principles of strategic learning and how it differs from traditional evaluation approaches. Presenters will describe what strategic learning looks like in practice and will discuss innovative methods to promote it.

Promoting the use of CRE in academic settings, private practice, and industry: Perspectives from GEDI alumni

Presenter: Laura Pryor

Nov 15, 2019 (05:45 PM – 06:30 PM), Hilton Marquette VIII

Theoretical concepts of culturally responsive evaluation (CRE), such as fully engaging participants, recognizing historical and contemporary injustices, and emphasizing qualitative approaches, can be difficult to implement in practice, given that the external forces that drive evaluations, such as clients and funders, may not prioritize these aspects of evaluation. This panel will explore challenges and opportunities for implementing CRE. Alumni of the Graduate Education Diversity Internship (GEDI) program will speak about their experiences in the context of academia, independent consulting, and industry. In small groups, we will explore questions including: How can evaluators working in academia help prepare the next generation of culturally responsive evaluators? How can evaluators faithfully implement the principles and concepts of CRE given differing client priorities and resource constraints? How can evaluators encourage the use of CRE among colleagues within their own organizations?

Measuring Implementation Fidelity in Emerging Program Models

Presenter: Hannah Betesh, with contributions from Anne Paprocki

Nov 16, 2019 (11:15 AM – 12:00 PM), CC M100 D

Measuring program fidelity is critical to the accurate interpretation of impact evaluation findings. But how can fidelity be measured when the intervention under study is a new model that can be expected to evolve as lessons from initial implementation are incorporated? This presentation will share lessons from a quasi-experimental impact evaluation of a relatively new program model for helping older women reconnect to the workforce. Because this model includes multiple components with varying levels of prior evidence, the evaluation involved determining how both to measure fidelity through site visit observations and to improve the specification of key model elements in response to early fidelity assessments. The internal and external evaluators for this project will discuss the development and deployment of a structured fidelity measurement tool, the use of early results to shape technical assistance and guidance, and the process of communicating unexpected findings to program and evaluation stakeholders at multiple levels.

Evaluating Arts Education: The art of mixing methods for measuring creativity

Presenters: Laura Pryor, Jennifer Hogg, and Rachel Estrella

Nov 16, 2019 (09:15 AM – 10:00 AM), Hilton Symphony II

The conference theme states that the role of evaluation is to provide “trusted, credible, evidence-based, and balanced conclusions about the quality, importance and value of what is relevant in our society.” While evaluating arts education programs is no exception to this standard, the context requires evaluators to grapple with designing evaluations that include multifaceted outcomes (e.g., creativity, self-expression) alongside standardized test scores. This panel explores the challenge by discussing the methodological approach to an evaluation of the National Endowment for the Arts’ Poetry Out Loud program and by reflecting on the role of evaluation in arts education. The first two panelists discuss how the qualitative and quantitative components of the evaluation attempted to balance the need for external rigor via standardized test scores with capturing multifaceted outcomes. The final presentation discusses questions related to credible and valid findings in an arts education context and the role of stakeholders in shaping validity.