
Evaluation Toolkit


Interpreting Data

When data are analyzed, they don’t automatically tell you a story or indicate how to act on them. In order to act on the data, you need to make meaning of the data through a process of interpretation. This is the point when you look at the analyzed data and say “so what?”

This step helps you determine potential explanations for why the data came out the way they did so that you know what actions to take as a result.

For example, what does it mean if incidents of bystander intervention increase between the beginning and the end of your intervention? And what might it mean if they do not increase?
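
If it helps to see the arithmetic behind the "so what?" question, here is a minimal sketch in Python (the participant counts are hypothetical, not data from any real program) of the kind of before-and-after comparison that interpretation typically starts from. The calculation only tells you the size and direction of a change; interpretation is about asking why.

    # Hypothetical counts of bystander interventions reported by five participants
    # at the start and end of a program (made-up numbers for illustration only).
    pre_counts = [2, 0, 1, 3, 1]
    post_counts = [4, 1, 2, 3, 2]

    pre_mean = sum(pre_counts) / len(pre_counts)
    post_mean = sum(post_counts) / len(post_counts)
    change = post_mean - pre_mean

    print(f"Average reported interventions at pre-test: {pre_mean:.1f}")
    print(f"Average reported interventions at post-test: {post_mean:.1f}")
    print(f"Change over time: {change:+.1f}")

    # The numbers alone do not answer "so what?" Interpretation asks whether the
    # change reflects real skill-building, a change in how participants counted
    # incidents, more or fewer opportunities to intervene, or something else.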

Involving program participants in interpretation of data can provide rich and critical insights. This involvement can be relatively informal. For example, in activity-based assessment, data are collected during various sessions of a curriculum-based intervention to gauge learning integration along the way. Bringing concerning (or "unsuccessful") data into the following session is encouraged, to facilitate dialogue about why the participants may not have integrated the learning in the way you expected (Curtis & Kukké, 2014). This dialogue can indicate whether the struggle was related to the evaluation instrument itself, the curriculum content, the facilitation, or something happening with the participants.

More formal options for participatory data interpretation include facilitating meetings where preliminary analyses are shared with stakeholders, and they are invited to offer feedback, reflections, and additional questions (Pankaj, Welsh, & Ostenso, 2011).

For one example of how to implement this, see the Data Placemats and What? So What? Now What? sections of the Training and Capacity Building Activities guide.

Resources

Participatory Analysis: Expanding Stakeholder Involvement in Evaluation (PDF, 9 pages) Published by the Innovation Network, this short guide offers advice, tools, and case studies related to involving stakeholders in data analysis.

Data Analysis: Analyze and Interpret (Online Course, free account required to log in) In part three of the NSVRC Data Analysis course, you will be able to identify types of data, analyze your data, and interpret your data with averages, changes over time, and differences between groups.

Using Data

Once you’ve analyzed and interpreted your data, it’s time to answer the question: Now what?

If you are engaged in participatory data interpretation, these questions might be answered in that process, and then your job is to make good on the changes.

The “now what?” phase can help you figure out what might need to be shifted about what you’re doing and how you’re doing it. When looking at your data, you will want to consider the following questions.

What do we need to adjust about the evaluation process or tools?

Sometimes you will discover that the evaluation process or tools you used did not give you sufficient information to make judgments about the intervention or its implementation, which means your primary point of action will be to make changes to the evaluation itself to yield better data in the future.

For example, you might discover, as some other preventionists have, that the young people you work with complete their surveys haphazardly, circle the spaces between answers, and write snarky comments in the margins. Or you might hold a focus group and discover that none of the participants has much to say in response to the questions you asked them. Either of these situations could indicate a problem with the questions or items you’re using, or with the methods of survey administration and focus group facilitation themselves.

What do we need to adjust about the nuts and bolts of the intervention (i.e., the program components)?

Perhaps the data show you that particular aspects of your programming are less effective than other aspects. For example, one preventionist noted that the data she collected from program participants consistently showed that they seemed to be integrating messages about sexism more so than they were integrating lessons about racism. It was clear that something needed to be tweaked about the discussions related to racial justice to make them more relevant and compelling to the participants. 

What do we need to adjust about program implementation (e.g., the way it is facilitated, the skill-sets of the implementers)?

It also might be the case that the components of your program need very little tweaking while the implementation needs more tweaking. For example, maybe you are not reaching the right people or maybe the people doing your community organizing or program facilitation need additional skill-building to be more effective in their work.

Communicating About Your Evaluation

In addition to using data to make changes in the ways outlined above, you also need to communicate about your evaluation to your funders and community partners. This is part of accountability and also a way to celebrate your successes and help others learn from your work.

This communication might be relatively informal (a mid-evaluation update at a committee meeting) or might be more formal (a full evaluation report or presentation). Regardless of the occasion, the way you communicate about the program and its evaluation matters. Remember, this is your story to tell – make it compelling! Consider which angle of the story you want to tell and the purpose of telling your story. You might tell the story in slightly different ways to different audiences and to meet different purposes. For example, maybe your board of directors wants to see numbers, but your community partners would rather hear stories about your work.

Ways to Communicate about Your Data and Evaluation

Data Visualization

Data visualization (also called data viz, for short) is exactly what it sounds like: ways of presenting data visually. As a field of practice, data viz draws from scientific findings and best practices in the graphic design and communication fields to help create powerful, data-driven images. The ever-popular infographic is a data viz tool that allows you to highlight important points with meaningful images in a succinct, easy-to-read way. To learn the basics of data visualization, check out this Self-Study Guide.

Charts & Graphs

Charts and graphs are visual representations of your data that make it easier for people to understand what the data communicate. They can be made in many computer programs you might already have on hand, including PowerPoint and Excel. These standard charts and graphs might need to be redesigned a bit to maximize their readability and impact. People have written entire books about this issue. (Seriously! Check out this one, for example, if you really want to nerd out about this.) If you don’t have time to read a whole book but want some good tips on how to work with charts, check out the video inspiration below.

Quantitative and qualitative data require different types of visualizations, and it is important to choose a type of visualization that is both appropriate for your data and clearly communicates the implications of the data. The default chart that your data processing software chooses might not be the best or most compelling option! Fortunately, there are guides for choosing the correct chart for both qualitative (Lyons & Evergreen, 2016) and quantitative (Gulbis, 2016) data that can help guide you through those decisions when it is time to make them. We highly recommend you run what you create through this data visualization checklist.
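
If you want to go beyond the default chart your spreadsheet produces, a small script can help. The following is a minimal, hypothetical sketch (Python with the widely used matplotlib library; the numbers, labels, and file name are invented for illustration, not drawn from any real evaluation) of a simple pre/post bar chart that applies some of the readability tweaks the chart-design guides above recommend: a descriptive title, direct value labels, and minimal chart junk.

    import matplotlib.pyplot as plt

    # Hypothetical pre/post percentages, invented for illustration only
    labels = ["Recognize warning signs", "Willing to intervene", "Know local resources"]
    pre = [42, 35, 28]
    post = [68, 61, 55]

    x = range(len(labels))
    width = 0.38

    fig, ax = plt.subplots(figsize=(8, 4))
    ax.bar([i - width / 2 for i in x], pre, width, label="Before program", color="#c7c7c7")
    ax.bar([i + width / 2 for i in x], post, width, label="After program", color="#1f77b4")

    # Readability tweaks: descriptive title, direct value labels, no extra chart junk
    ax.set_title("More participants reported each outcome after the program (%)")
    ax.set_xticks(list(x))
    ax.set_xticklabels(labels)
    ax.set_ylim(0, 100)
    for i, (before, after) in enumerate(zip(pre, post)):
        ax.text(i - width / 2, before + 2, str(before), ha="center")
        ax.text(i + width / 2, after + 2, str(after), ha="center")
    ax.legend(frameon=False)
    for side in ("top", "right"):
        ax.spines[side].set_visible(False)

    plt.tight_layout()
    plt.savefig("pre_post_chart.png", dpi=200)

Whatever tool you use, the same design choices apply: state the takeaway in the title, label the values directly, and remove anything that does not help the reader see the point.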

Reports

Typically, people give a full rundown of their evaluation process and results in an evaluation report. These reports are shared with funders and other community partners. The reports are often long and contain more information than is useful to all interested parties, so some evaluation experts, like Stephanie Evergreen, recommend a 1-3-25 model: a 1-page handout of highlights, a 3-page summary, and a 25-page full report (Evergreen, 2015). Check out Stephanie’s Evaluation Report Layout Checklist; it will help you make sure your content and layout are optimized for easy reading and impact (Evergreen, 2013b).

If you are working with youth and need an interactive way to share data, take a look at Stephanie Evergreen's Data Fortune Teller tool and customizable form. 

Infographics

Infographics provide an opportunity for you to visually represent a variety of data points succinctly and powerfully. Generally, an infographic consists of one page’s worth of data and information that communicates one or maybe two main points. Several online programs offer free or low-cost options for making infographics and include templates, images, charts, and options to upload or input data. Check out Piktochart and the infographic section of Animaker to see examples of what you can do. (Animaker lets you animate infographics to tell a more dynamic story!)

Inspiration

Stephanie Evergreen’s presentation 8 Steps to Being a Data Presentation Rock Star is a fun way to learn the basics of communicating about data (Evergreen, 2013a). While the presentation primarily focuses on creating slide decks, the skills also transfer to creating data visualizations for reports and other types of communication media.

Additional Resources

Communicating and Disseminating Evaluation Results Worksheet (PDF, 2 pages)   

Evergreen Data: Stephanie Evergreen is a data visualization consultant who has authored two great books on data visualization. Her website and blog offer useful free resources, including the Qualitative Chart Chooser referenced above. You can join the Data Visualization Academy for more robust assistance.

DiY Data Design offers online courses and coaching around data visualization needs.

Data Analysis: Share Your Findings (Online Course, free account required to log in) In the fourth section of the NSVRC Data Analysis Online Course, you’ll learn about data visualization to report and summarize your findings.

 

References

Curtis, M. J., & Kukké, S. (2014). Activity-based assessments: Integrating evaluation into prevention curricula. Retrieved from the Texas Association Against Sexual Assault: http://www.taasa.org/wp-content/uploads/2014/09/Activity-Based-Assessment-Toolkit-Final.pdf

Evergreen, S. (2013a, December 19). 8 steps to becoming a reporting Rockstar [Video file]. Retrieved from https://vimeo.com/82318228 

Evergreen, S. (2013b). Evaluation report layout checklist. Retrieved from http://stephanieevergreen.com/wp-content/uploads/2013/02/ERLC.pdf

Evergreen, S. (2015). What TLDR means for your evaluation reports: Too long didn’t read (let’s fix that). Retrieved from http://stephanieevergreen.com/wp-content/uploads/2015/11/TLDRHandout.pdf

Gulbis, J. (2016, March 1). Data visualization – How to pick the right chart type? Retrieved from https://eazybi.com/blog/data_visualization_and_chart_types/

Lyons, J., & Evergreen, S. (2016). Qualitative chart chooser 2.0. Retrieved from http://stephanieevergreen.com/wp-content/uploads/2016/11/Qualitative-Chooser-2.0.pdf

Pankaj, V., Welsh, M. & Ostenso, L. (2011). Participatory analysis: Expanding stakeholder involvement in evaluation. Retrieved from http://www.pointk.org/client_docs/innovation_network-participatory_analysis.pdf


If you are looking for resources and information to support your journey of evaluating sexual violence prevention work, then you have come to the right place. We built this toolkit to increase your knowledge, give you access to useful tools, and point you toward additional resources. The goal of this toolkit is to increase the capacity to implement program evaluation for sexual violence prevention work by providing tools and guidance for both program implementers and those who support them.

Evaluation is a vast and diverse field of practice. Properly implemented and integrated with program implementation, it can help us do every aspect of our jobs better, and enable us to create deep and lasting change in our communities. Because there are so many types of evaluation and methods that can be used, the process of developing and implementing the appropriate evaluation for our work can sometimes feel overwhelming or not worthwhile. Sometimes when planning an evaluation we pick the easiest method, even if it is not the most useful for the context and content of our work. The purpose of this toolkit is to present you with evaluation tools that are accessible, implementation processes that are reasonable, and guidance on selecting the most effective methods for your evaluation tasks.

This toolkit offers guidance on evaluation within the context of primary prevention. This toolkit will equip you as prevention workers at the local and state levels with the knowledge and skills necessary to make strategic decisions about evaluation, including

  • Designing and implementing evaluation of primary prevention programs
  • Providing support to others doing evaluation work
  • Understanding the language of evaluation to engage with consultants or other partners.

In each section of the toolkit, you will find one or more of the following to assist you in your capacity building efforts. You can browse the toolkit through the left menu or use the "next" and "back" buttons at the bottom of each section. Look for the icons below throughout the toolkit to link you to more information and resources.

Resources for Further Learning

More information on particular evaluation topics and issues; most of these resources are free and accessible on the Internet. Resources that are not free are indicated with a $.

In addition to section-specific resources, check out the following Self-Study Guides for curated sets of resources that will lead you through a course of self-study on particular evaluation topics.

Tools for Implementation

There are worksheets and other tools that can assist you in planning for or implementing program evaluations. Use the worksheets as you move through the toolkit.

Training Activities

  • Training activities that can assist you in sharing the knowledge and skills you’ve gained about evaluation with other people.

Examples

  • Examples or case studies from the field can help you understand how certain evaluation methods or concepts have been applied by others who are working to prevent sexual violence.

Inspirations

What is evaluation?

“Evaluation is the process of determining the merit, worth and value of things, and evaluations are the products of that process.” (Michael Scriven, Evaluation Thesaurus, p. 1)

You may already have some basic idea of what evaluation is or what it can do for you.

What comes to mind when you hear the term “evaluation?” Do you think about pre- and post-tests that you hand out during education-based programming? Do you think about annual staff evaluations or about surveys you fill out at the end of a conference to rate your experience of the event?

The term “evaluation” covers all of these and a variety of other activities and processes. This drawing by cartoonist and evaluator Chris Lysy of Freshspectrum, LLC., offers a definition of evaluation from one prominent evaluator, Michael Scriven. This definition points out that evaluation is about more than just determining whether a program or initiative was implemented the way it was intended to be implemented, or if that implementation yielded the intended results. Instead, evaluation is a process of “determining the merit, worth and value” of the elements of a program and its implementation.

Program evaluation, the specific type of evaluation that is most relevant to sexual violence prevention workers, seeks to determine how meaningful the programs and their outcomes are to the people who are impacted by them. That is, just knowing that we implemented a program the way we intended to and that we achieved the outcomes we intended to achieve does not necessarily mean the outcomes were meaningful to the participants, that the program was as worthwhile as it could be, or even that it reached the right people. Evaluation can help us to understand these things. 

What can evaluation do for prevention workers?

Evaluation is multi-faceted. Because of this, evaluation offers myriad benefits to us as prevention workers. Specifically, evaluation can benefit us in the following ways:

  1. We can use evaluations to measure the changes we help create in our communities through our prevention initiatives.
  2. Ongoing evaluations show us where we are doing well and where we can improve.
  3. Through evaluation, we develop language to help us tell the story of our initiatives to funders and community partners in a way that goes beyond our impressions of the program’s impact and import.
  4. Evaluation is one method of practicing accountability to participants, community members, and funders.
  5. Evaluation can keep us on track and offer ways to make strategic decisions when we need to shift our plans.
  6. Evaluation helps us determine if the work we are doing is the most meaningful work and/or the right work for the right people at the right time.
  7. Evaluation gives us a chance to participate in developing an evidence base for our work. While the field collectively develops an evidence base for primary prevention of sexual violence, the best way to contribute to this effort is to design and evaluate theoretically sound programming. Having program planning and evaluation skills will assist you in participating in that process of building an evidence base that is informed by practice on the ground.

Not all of these benefits come from the same type of or approach to program evaluation, so read on to find out more about how to achieve these benefits from the process and products of evaluation.

Inspiration

Watch these videos from the NSVRC Mapping Evaluation Project to hear preventionists talk about evaluating prevention work. Notice how their approaches, enthusiasms, and focuses differ and how they are similar. Do you prefer audio podcasts? Check out this Resource on the Go episode: Why Are We Talking About Evaluation?

What does evaluation have to do with social justice?

“A social justice-oriented evaluation examines the holistic nature of social problems. It seeks to increase understanding of the interdependency among individual, community, and society using a more judicious democratic process in generating knowledge about social problems and social interventions and using this knowledge to advance social progress” (Thomas & Madison, 2010, p. 572)

Evaluation may not immediately strike you as a part of your work that can directly support creating a more just and equitable world. For some, evaluation is seen as an innocuous, neutral part of their work. Others might see it as a hindrance to, or in direct contradiction to, the social justice aspects of their primary prevention efforts.

Examples from the field:

Joe is a preventionist who works with people with disabilities, and he reports that the evaluation procedures he has been asked to use are not appropriate for the population he works with. The written evaluation tools, especially, often result in participants feeling shamed or stupid when they cannot understand what is being asked of them or are unable to read.

Amanda does school-based prevention work and says that written pre- and post-tests hinder her ability to build rapport with students because such instruments feel like just another test or exam students have to take. Since the pre-test is the first interaction she has with students and the post-test is the last one, she feels like her time with students neither starts off nor ends on the right foot.

Jackson works primarily with marginalized communities and notes that the people with whom they work are weary of being studied or tested and distrust additional attempts to collect data about them and their lives.

It is difficult to hear stories like these and not think that such evaluation efforts may be harmful to a vision of a more just world. Are these problems inherent in evaluation as a practice or discipline? Many prominent evaluators believe that evaluators should be agents of change working to improve the lives of marginalized peoples in the communities in which they work (Mertens, 2009, p. 207). In fact, one of the principles of Empowerment Evaluation says that “Evaluation can and should be used to address social inequities in society” (Fetterman, 2015, p. 27).

Evaluation does not just measure our social justice work. Depending on how we implement an evaluation, it can either help or hinder progress toward the world we seek to create through our initiatives. We have to keep in mind the ways in which evaluation is political. Evaluations influence the way funders, organizations, and even politicians make decisions about funding and programmatic priorities. Moreover, the data we collect and the way they are shared tell a story about the people with whom we work. The potential impacts of our data collection, analysis, and interpretation cannot be mere afterthoughts to our evaluative processes.

Every time we make a decision about evaluation, we have to weigh issues of justice, access, and equity. We collect data from and about real people that will have impacts on the lives of those real people. The processes of evaluation can make it easy for us to forget this.

Our approaches to evaluation can serve as an integral part of our work to build more just and equitable communities, if our approaches mirror the changes we want to create. Yet, the types of data we collect and the ways we analyze, interpret, use, and share these data can impact the way other community partners and funders think about and understand the issue of sexual violence, injustice in our communities, and the solutions to these issues.

Through our evaluation practices, we model behaviors related to the following concerns:

  1. Transparency
  2. Consent/Assent
  3. Power-sharing
  4. Relationship-building

For a step-by-step process for avoiding racism, sexism, homophobia, and more in data collection and analysis, check out We All Count’s Data Equity Framework. You can also listen to our podcast on Data Equity.

Evaluation and Culture

“Those who engage in evaluation do so from perspectives that reflect their values, their ways of viewing the world, and their culture. Culture shapes the ways in which evaluation questions are conceptualized, which in turn influence what data are collected, how the data will be collected and analyzed, and how data are interpreted” (American Evaluation Association, 2011, p. 2).

As the quote above highlights, as people who plan and conduct evaluation, our own cultural backgrounds influence our approach to evaluation. Doing social justice evaluation work that is valid and useful requires engaging in culturally responsive practices. Every aspect and stage of evaluation needs to take into account and be responsive to the culture of the people participating in it, especially the cultures of the people who will be most impacted by the evaluation and from whom data will be collected.

A note on terminology:
As we continue to refine our understanding and practice of what it means to constructively and respectfully engage with people across cultures, the terms used to refer to this process can also shift. Perhaps you have heard terms like “cultural competence” or “cultural humility.”

For example, if you’re working with groups who value and prioritize oral communication over written communication, then oral methods of data collection (e.g., storytelling, interviews, etc.) will probably be met with less resistance (and possible enthusiasm) than will paper-and-pencil measures like questionnaires.

Example: The Visioning B.E.A.R. Circle Intertribal Coalition’s (VBCIC) program Walking in Balance with All Our Relations is based on a circle process. This process involves participants sharing, one-at-a-time, in response to quotes or information offered to the group by a facilitator, the Circle Keeper, who is also part of the circle. Evaluators hired to evaluate the project worked closely with the Circle Keeper to identify and implement evaluation processes that were not intrusive to the group but rather focused on the storytelling and sharing aspects of the circle as part of data collection. Circle Keepers were given evaluative prompts or questions to share with the group, and the participants responded during one round of the circle. Piloting these ideas in circles allowed the evaluators and Circle Keeper to continue to refine prompts and collection of the data (Ramsey-Klawsnik, Lefebvre, & Lemmon, 2016).

For an update on how the Visioning B.E.A.R. Circle is advancing its evaluation practices, listen to our podcast episode: Using an Indigenous Circle Process for Evaluation.

Participatory Evaluation

“While participatory approaches may involve a range of different stakeholders, particular attention should be paid to meaningful participation of programme participants in the evaluation process (i.e., doing evaluation ‘with’ and ‘by’ programme participants rather than ‘of’ or ‘for’ them)” (Guijt, 2014, p. 4).

To move toward more equitable power sharing among stakeholders in evaluation efforts, we have to ask ourselves what it means to do participant-centered evaluation and what it means to do evaluation that aligns with the principles of our vision and programming.

One way to increase the social justice-orientation and cultural relevance of evaluations is to make them participatory.

Participatory evaluations can move beyond mere stakeholder involvement in decision-making by including program participants and community members in conducting various phases of the evaluation as co-evaluators. This can happen at any stage or all stages:

  • Planning/Design
  • Data Collection
  • Data Analysis
  • Data Interpretation
  • Evaluation Reporting
  • Evaluation Use

This list is a generic set of steps involved in many types of evaluation, but the specific steps of your own evaluation might look different. For example, not all evaluations involve formal reporting, though all evaluations ideally involve some way of sharing the data, conclusions, and planned actions.

The level and timing of participant involvement needs to be determined based on a variety of variables, keeping issues of equity at the forefront of the process.

For example, you must consider issues such as the following:

  • If everyone on the evaluation team except the program participants is being compensated for their time, is that fair?
  • If outside forces (e.g., funder deadlines) have you crunched for time during one or more phases of the evaluation, can those phases still be participatory? If so, how? If not, how do you still solicit input from impacted stakeholders and build in ways for later adjustments? Is there room to have a conversation with the funder about your vision for participatory evaluation and find out if there can be some leeway in your deadlines?
  • Is there commitment and buy-in from key community leaders for the process of participatory evaluation? Will the variety of participatory stakeholders’ voices and input be taken seriously?

Participatory evaluation involves building your own (and your agency’s) capacity to facilitate collective processes with shared decision making. Additionally, you will need to find ways to build the capacity of stakeholders who are new to program evaluation. They need to be equipped with the knowledge and skills to fully participate in whatever aspects of the evaluation they are involved in.

Inspiration

Check out part 1 and part 2 of Katrina Bledsoe (2014) discussing culturally responsive evaluation.

Watch "Cultural Humility: People, Principles and Practices," a 30-minute documentary by San Francisco State professor Vivian Chávez, that mixes poetry with music, interviews, archival footage, and images of community, nature and dance to explain what Cultural Humility is and why we need it.

 

Resources

CREA in the 21st Century: The New Frontier: This blog from the Center for Culturally Responsive Evaluation and Assessment features articles about critical issues related to culturally responsive evaluation practice.

Participatory Approaches: (PDF, 23 pages) This Methodological Brief from UNICEF provides a very accessible and detailed introduction to participatory program evaluation.

Participatory Evaluation: (Online Article) This page on BetterEvaluation gives a brief overview of participatory evaluation.

Self-Study Plan: Integrated, Creative, and Participatory Evaluation Approaches (Intermediate): (PDF, TXT) This self-study guide, which is part of the Evaluation Toolkit, focuses on integrated, creative, and participatory approaches to evaluation. It includes up to 6 hours of online training options, as well as in-person training opportunities. This intermediate-level plan will assist learners in describing the benefits of using participatory approaches to evaluation, identifying creative evaluation approaches, and developing a plan for integrating participatory options into evaluation work.

Putting Youth Participatory Evaluation into Action: (Video Presentation) This video presentation by Katie Richards-Schuster explains the process and benefits of engaging youth in evaluation work.

Evaluating Culturally-Relevant Sexual Violence Prevention Initiatives: Lessons Learned with the Visioning B.E.A.R. Circle Intertribal Coalition Inc. Violence Prevention Curriculum: (Recorded Webinar) This recorded webinar explores evaluating a culturally relevant prevention program and lessons learned.

Case Study: Culturally Relevant Evaluation of Prevention Efforts: (PDF, 14 pages) This case study examines the evaluation process of a culturally specific violence prevention curriculum.

Statement on Cultural Competence in Evaluation: (PDF, 10 pages) The American Evaluation Association developed this statement on cultural competence in evaluation to guide evaluators and also help the public understand the importance of cultural competence in evaluation practice.

Handout

Tools, Methods, and Activities for Participatory Evaluation

Training and Capacity Building Activities

  • To explore implications of and responses to integrating social justice principles into evaluation, see the activity exploring social justice quotes (begins on pg. 8).

 

References

American Evaluation Association. (2011). American Evaluation Association public statement on cultural competence in evaluation. Retrieved from https://www.eval.org/Community/Volunteer/Statement-on-Cultural-Competence-in-Evaluation  

Fetterman, D. M. (2015). Empowerment evaluation: Theories, principles, concepts, and steps. In D.M. Fetterman, S. J. Kaftarian, & A. Wandersman (Eds.), Empowerment evaluation (2nd ed.).  Los Angeles, CA: Sage.

Guijt, I. (2014). Participatory approaches (Methodological brief #5). Retrieved from the United Nations Children’s Fund (UNICEF) Office of Research: https://www.unicef-irc.org/publications/pdf/brief_5_participatoryapproaches_eng.pdf

Mertens, D. M. (2009). Transformative research and evaluation. New York, NY: Guilford Press.

Thomas, V. G., & Madison, A. (2010). Integration of social justice into the teaching of evaluation. American Journal of Evaluation, 31, 570-583. doi:10.1177/1098214010368426

Evaluation involves more than setting outcomes and determining data collection methods to measure the achievement of those outcomes. Just as prevention programming can be driven by one or more theories and approaches – and the programming will vary depending on the theories and approaches chosen – evaluation also varies depending on the approach or framework chosen to guide it.

The following section describes different paradigms and approaches to evaluation. Most of these approaches vary somewhat in their purpose or overarching theoretical basis, but in practice, they draw from the same or similar methods for collecting data and answering evaluative questions.

The approach chosen by preventionists will depend upon

  • the purpose of the evaluation,
  • the skills of the evaluators,
  • the agency’s philosophical orientations, and
  • the resources available for implementing the evaluation.

The following approaches are relevant to sexual violence prevention work. Although the approaches have different names, some of them overlap or can be used in conjunction with each other. Additionally, the primary duties or level of involvement of the evaluator varies depending on the approach. Having a general understanding of these different approaches to evaluation will help you make informed decisions about the best approach for your own evaluation needs at various times.

 Click on the links below to learn more about each approach:

Empowerment Evaluation

Empowerment evaluation focuses primarily on building the capacity of individuals and organizations to conduct their own evaluations. The evaluator’s role includes carrying out particular evaluation tasks and facilitating capacity building within an organization and among stakeholders. Among the principles that drive empowerment evaluation are community ownership, inclusion, democratic participation, and social justice.

For more information on Empowerment Evaluation:

The Principles of Empowerment Evaluation (Recorded Webinar) This webinar features David Fetterman giving an overview of Empowerment Evaluation.

Evaluation for Improvement: A Seven-Step Empowerment Evaluation Approach For Violence Prevention Organizations (PDF, 104 pages) This guide from the CDC includes extensive, step-by-step guidance on hiring an empowerment evaluator and is specifically geared toward organizations engaged in violence prevention work.

Empowerment Evaluation: Knowledge and Tools for Self-Assessment, Evaluation Capacity Building, and Accountability (Book$) This book by David M. Fetterman, Shakeh J. Kaftarian, and Abraham Wandersman provides an in-depth look at empowerment evaluation and includes models, tools, and case studies to help evaluators with implementation.

The Ohio Primary Prevention of Intimate Partner Violence & Sexual Violence Empowerment Evaluation Toolkit (Online Toolkit) This toolkit was released by the Ohio Domestic Violence Network, in partnership with the Ohio Department of Health (ODH) Sexual Assault and Domestic Violence Prevention Program. It was developed by Sandra Ortega, Ph.D., and Amy Bush Stevens, MSW, MPH, and supported through funding from the Centers for Disease Control and Prevention’s Domestic Violence Prevention Enhancement and Leadership Through Alliances (DELTA) Program and Rape Prevention Education Program. The intended audience is local primary prevention providers, particularly those who are beginners or who have intermediate-level skills in program evaluation. Evaluation consultants could also use the toolkit as a source of training and technical assistance materials. It includes tools on needs and resource assessments, logic models, evaluation methods, data collection and analysis, and presenting and using evaluation findings.

Utilization-Focused Evaluation

Utilization-Focused Evaluation puts evaluation use at the center of all aspects of planning and implementing an evaluation. Early stages of the evaluation process involve identifying both the intended users and the intended uses of the evaluation. Evaluators working from this approach are responsible for coordination and facilitation of various processes. As a framework, Utilization-Focused Evaluation can be implemented in a variety of different ways.

For more information on Utilization-Focused Evaluation:

Utilization-Focused Evaluation: A Primer for Evaluators (PDF, 132 pages) In this document, Ricardo Ramirez and Dal Brodhead provide evaluators with guidance on how to implement a 12-step evaluation guided by a utilization-focused framework. Examples are included for each step and case studies are provided in the appendix to give more substantial information about Utilization-Focused Evaluation in practice.

Utilization Focused Evaluation (Online Article) This article on the BetterEvaluation website provides a brief overview of UFE, including both a 5-step and a 17-step framework, an example, and advice for conducting such an evaluation. 

Essentials of Utilization-Focused Evaluation (Book$) This book by Michael Quinn Patton gives extensive guidance on implementing UFE.

Transformative Mixed Methods Evaluation

Transformative Mixed Methods is a framework for both research and evaluation that focuses on centering the voices of marginalized communities in the design, implementation, and use of evaluation (Mertens, 2007). This framework requires evaluators to be self-reflective about their own positions and identities in relation to program participants and to prioritize the evaluation’s impact in the direction of increased social justice.

For more information on Transformative Mixed Methods:

Transformative Paradigm: Mixed Methods and Social Justice (PDF, 16 pages) This article by Donna Mertens outlines the basic assumptions, approaches, and social justice implications of Transformative Mixed Methods.

Transformative Research and Evaluation (Book$) This book by Donna Mertens provides an extensive overview of the theory and practice of Transformative Mixed Methods, including examples and guidance on analyzing and reporting data from transformative evaluations.

Actionable Evaluation

Actionable evaluation is an approach that focuses on generating clear, evaluative questions and actionable answers to those questions (Davidson, 2005). This approach recommends the use of rubrics to track and measure criteria related to outcomes rather than focusing on distinct indicators of outcome achievement. This approach is especially useful for tracking similar interventions or different interventions with similar outcomes that are implemented in various contexts and sites. An evaluator using this approach would facilitate the process of rubric design, collect and analyze the data, and assist the organization with using the data. They might also build the organization’s capacity to continue using the rubrics on their own.

For more information on Actionable Evaluation:

Actionable Evaluation Basics  (Minibook, various formats$) This accessible and useful evaluation minibook by E. Jane Davidson gives an overview of actionable evaluation and guidance on how to implement it.

Activity-Based Evaluation

Activity-Based Evaluation (or activity-based assessment) focuses on integrating data collection into existing curriculum-based efforts to assess learning integration at distinct points throughout the intervention (Curtis & Kukké, 2014). Through this approach, facilitators can get real-time feedback to make improvements in the intervention. An evaluator using this approach might design the data collection materials and assist with data collection, analysis, and use, or they might facilitate building the organization’s capacity to use this methodology.

For more information on Activity-Based Evaluation:

Activity-Based Assessment: Integrating Evaluation into Prevention Curricula: (PDF, 32 pages) This toolkit from the Texas Association Against Sexual Assault and the Texas Council on Family Violence provides an introduction to integrating evaluation into an educational curriculum, gives examples of data collection tools to use, and makes suggestions for analysis and use of the data.

Human Spectrogram Online Course (15 minutes) This free online interactive learning tool (NSVRC, 2019) walks participants through how to implement an activity-based evaluation through the use of the human spectrogram. Information covered includes an overview of the human spectrogram, a preparation checklist, and additional resources from the NSVRC Evaluation Toolkit. Creating a free account with NSVRC's online campus is required.

Participatory Evaluation

Participatory approaches to evaluation focus on engaging program recipients and community members in evaluation planning, implementation, and use (Guijt, 2014). These  approaches can increase community buy-in for evaluation and make the evaluation more credible to the community through mobilizing various community stakeholders and program participants in the evaluation process. An outside evaluator would support participatory evaluation through facilitating the various processes involved in planning, implementation, and use of the evaluation.

For more information on Participatory Evaluation:

Participatory Approaches: (PDF, 23 pages) This Methodological Brief from UNICEF provides a very accessible and detailed introduction to participatory program evaluation.

Participatory Evaluation: (Online Article) This page on BetterEvaluation gives a brief overview of participatory evaluation.

Putting Youth Participatory Evaluation into Action: (Video Presentation) This video presentation by Katie Richards-Schuster explains the process and benefits of engaging youth in evaluation work.

Developmental Evaluation

Developmental Evaluation focuses on initiatives implemented in complex environments and tracks the development of these initiatives, which must be dynamic and responsive to their environments (Gamble, 2008). Under the Developmental Evaluation framework, the evaluator is a more active part of the program team, providing real-time feedback to impact the intervention and engaging stakeholders in making meaning of the data collected.

For more information on Developmental Evaluation:

A Developmental Evaluation Primer (PDF, 38 pages) This short guide provides an overview of developmental evaluation and guidance on conducting evaluations through the developmental approach.

Developmental Evaluation for Equity-Focused and Gender-Responsive Evaluation (Recorded Webinar) This webinar features evaluator Michael Quinn Patton giving an overview of developmental evaluation.

Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use (Book$) This book by Michael Quinn Patton provides a thorough introduction to developmental evaluation theory and practice. It is written in an accessible manner and includes real-world examples.

Principles-Focused Evaluation

Principles-focused evaluation, a type of Developmental Evaluation, measures a program against a set of evidence-based principles that should drive programming and lead to improved outcomes. This means that programs following similar principles can be evaluated on similar criteria even when their programming or implementation looks different.

Inspiration

Watch this 44-minute NSVRC Evaluation Toolkit webinar, which introduces preventionists to principles-focused evaluation (PFE) and how it has been used to support sexual violence prevention programs. Viewers will learn how to incorporate their own sexual violence prevention effectiveness principles into their evaluation approach. This learning is accompanied by examples from Washington state’s experience of engaging preventionists in the process of identifying principles – a project supported by the state’s Department of Health. If you are short on time, check out this overview on our podcast.

 

For more information on Principles-Focused Evaluation:

Principles-Focused Evaluation: The GUIDE (Book$) This book by Michael Quinn Patton provides a complete overview of Principles-Focused Evaluation and a guide for implementation.

 

References

Curtis, M. J., & Kukké, S. (2014). Activity-based assessments: Integrating evaluation into prevention curricula. Retrieved from the Texas Association Against Sexual Assault: http://www.taasa.org/wp-content/uploads/2014/09/Activity-Based-Assessment-Toolkit-Final.pdf

Davidson, E. J. (2005). Evaluation methodology basics: The nuts and bolts of sound evaluation. Thousand Oaks, CA: Sage.

Gamble, J. A. A. (2008). A developmental evaluation primer. Retrieved from Better Evaluation: https://www.betterevaluation.org/sites/default/files/A_Developmental_Evaluation_Primer_-_EN.pdf

Guijt, I. (2014). Participatory approaches (Methodological brief #5). Retrieved from the United Nations Children’s Fund (UNICEF) Office of Research: https://www.participatorymethods.org/files/Participatory_Approaches_ENG%20Irene%20Guijt.pdf

Mertens, D. M. (2007). Transformative paradigm: Mixed methods and social justice. Journal of Mixed Methods Research, 1(3), 212-225. doi:10.1177/1558689807302811


This section provides an introduction on how to do evaluation and outlines general steps to take when conducting a program evaluation. While this section does contain tips and tools, they are best implemented in the context of a well-defined and specific evaluation approach, which can be chosen and designed to be appropriate to both the specific type of intervention you want to evaluate and the context in which you will evaluate it.

So, how do you do evaluation?

At the most basic level, evaluation involves asking a few important questions about your work, collecting information that helps you answer those questions, making sense of the information, and acting on that information.

These steps all have more formal names, and many types of approaches to evaluation have many more specific steps. Just keep in mind that evaluation involves more than the collection of information or data. The process of evaluation starts way before you begin collecting data and does not end with analysis of the data.  That is, if you do a written pre- and post-test as part of your program, that’s a tool you use in evaluation, not the entire evaluation. And if you’re looking to improve your evaluation, that process will likely involve more than just revamping the instruments you use to collect information.

Who needs to be involved?

Many approaches to evaluation give significant focus to the question of who needs to be involved in program evaluation. As with most other decisions you will need to make around evaluation, who to involve – and when to involve them – depends on many different factors.

Consider the following questions (also available as a worksheet) to help guide your brainstorming about who to include:

  • Who is impacted by the program or by the evaluation?
  • Who makes decisions about or can impact how the program or the evaluation is implemented?
  • Whose voices are most in need of amplification during the process of evaluation planning?
  • To whom will the evaluation need to speak? To whom do you need or want to tell the story of your work?
  • What areas of expertise do you need to plan the evaluation? Who can you draw on for that expertise? (This question will likely need to be revisited as the evaluation takes shape.)

After you have a sense about who might need to be included, you’ll also want to consider other factors such as the following:

  • What is the timeframe for planning the evaluation?
  • What resources are available for planning (e.g., can you offer payment or any sort of stipend for participation in evaluation planning)?
  • What is the level of buy-in for evaluation among the various groups and people named above?

You might want to sketch out a diagram that highlights these parties, including their relationship to the evaluation and to each other. You can use the Mapping Your Evaluation System handout or draw it by hand on a blank sheet of paper. Consider the following basic chart:

Chart of evaluation relationships

This shows the primary stakeholders in many evaluations: funders, agency staff members, and participants. Additionally, if your work is collaborative, you may have community partners who are impacted by or who impact the evaluation. This diagram could expand rather quickly by taking into account the setting in which you’re working with the participants. If you’re working in a school, teachers, parents, and administrators might be added to the chart. If you’re doing norms change work within a community, then the possible stakeholders expand to include all the members of that community.

Involving stakeholders who impact and are impacted by the evaluation is part of what you do to set yourself up for success. Here are some examples of what that means:

Funders: Funders often stipulate the type of evaluation or evaluation measures that must be used for programs they fund. Having funders at the table for planning the evaluation helps you align with their requirements. Also, sometimes funders require processes or methods that are not ideal for your population or your programming. If you involve them in the process, you have a chance to advocate for changes if any are needed and the funders can see firsthand why the changes might be necessary.

Program Participants: Program participants can serve as an important reality check for both program planning and evaluation planning processes. Since evaluations, by and large, eventually collect data from and about a particular population, participants’ direct input on credible and culturally responsive ways to do that can help your evaluation orient itself toward meaningful questions and valid ways to answer them. If you’re taking a participatory approach to program evaluation, you’ll want to involve program participants in the process as soon as it is meaningful to do so.

Community Partners: Other community partners might be brought on board for their expertise in an area of program development and evaluation or because they also work with particular populations of interest, among many other reasons.

Tips

Keep in mind that involving multiple stakeholders also involves managing and accounting for multiple power-dynamics operating within the evaluation team. It’s important to consider how those dynamics will be negotiated. If you’ve considered answers to the questions above, you can probably already see places where power asymmetries are apparent. For example, you might have funders and program participants involved. Consider in advance how that will be navigated and how the participation of the most marginalized players – likely the program participants – will be both robust and valued. If you cannot ensure meaningful involvement for the participants in the planning process, it might be preferable to engage them in other ways rather than run the risk of tokenizing them or shutting them down through the process.

When you know what the limits and expectations of participation are, you can share them with potential stakeholders and let them make a decision about their involvement. If they do not want to be a full member of the evaluation team, perhaps they will still want to assist with one of the following steps:

  • reviewing evaluation plans or tools,
  • participating in analysis and interpretation, or
  • brainstorming ways to use the data.

Keep in mind that the people you involve in planning the evaluation may or may not be the same people you involve in implementing the evaluation. Consider where people will have the most impact and prioritize their involvement in the steps and decision-making points accordingly. This helps avoid causing fatigue among your evaluation team and also increases the likelihood that people will feel good about their involvement and make meaningful contributions.

When do we start?

Ideally, you will plan the evaluation as you’re planning the program, so that they work seamlessly together and so that evaluation procedures can be built in at all of the most useful times during implementation.

A detailed planning process for both programming and evaluation sets you up for success. Even though it can feel tempting to rush the process – or skip it all together – there is no substitute for a thoughtful planning process. Intentional program and evaluation planning, which might take as long as three to six months before program implementation begins, can involve any or all of the following:

  • data collection to determine the scope of the issue to be addressed,
  • setting short-, mid-, and long-term goals,
  • identifying the best population(s) to reach to address the problem,
  • identifying and building the program components,
  • building community buy-in, and
  • designing evaluation systems for the program.

As you move along in planning your evaluation, even before you implement anything, you might notice that aspects of your program plan need to be refined or re-thought. This is one of the ways evaluation can help your work even before you begin collecting data! You will also need to determine whether or not your program is ready to be evaluated (i.e., is it evaluable?). If you do not have a strong program theory, or do not have buy-in for evaluation or agreement about program goals among critical stakeholders, it might not yet be appropriate to conduct an evaluation (Davies, 2015). You also need to determine what type of evaluation is most appropriate for your program.

Sometimes, however, we find ourselves in situations where we have a program or initiative already in progress and realize “Oops! I totally forgot about an evaluation.” If that’s the case, don’t despair. You will still need to go through all of the usual steps for planning an evaluation, but you might have to jump backward a few steps to get caught up and use a few different methods (e.g., a retrospective pre-test) when you begin collecting data so that you can do things like establishing a retrospective baseline.
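
As a concrete illustration of what establishing a retrospective baseline can look like once the data are in hand, here is a minimal Python sketch (the ratings are hypothetical, invented for illustration) that summarizes a retrospective pre-test, where participants rate both "before the program" and "now" in the same post-program survey.

    # Hypothetical retrospective pre-test data: each pair is one participant's
    # ("before the program", "now") self-rating on a 1-5 scale, both collected at
    # the end of the program because no true baseline was gathered.
    ratings = [(2, 4), (3, 4), (1, 3), (2, 2), (3, 5)]

    then_mean = sum(then for then, now in ratings) / len(ratings)
    now_mean = sum(now for then, now in ratings) / len(ratings)
    mean_change = sum(now - then for then, now in ratings) / len(ratings)

    print(f"Retrospective baseline (mean 'then' rating): {then_mean:.1f}")
    print(f"Mean 'now' rating: {now_mean:.1f}")
    print(f"Average self-reported change: {mean_change:+.1f}")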

How do we start?

Almost all forms of evaluation start with the need to describe what will be evaluated, also known as an evaluand. For sexual violence preventionists, this will be some aspect of your prevention programming or perhaps your entire comprehensive program. Describing your program or initiative helps you identify if all of the components are in place and if everything is set up to progress in a meaningful and useful way. It will also help you determine what type of evaluation your program warrants.

If you have pre-determined outcomes, it can also help you make sure that your outcomes and your intervention are strongly related to each other and that your outcomes progress in a logical way. This is also the point at which you can check to make sure the changes you seek are connected to your initiatives through theory or previous research.

A description of the initiative/s to be implemented and evaluated will most likely include the following components:

  • A description of the issue or problem you are addressing
  • An overall vision for what could be different if you address the issue
  • The specific changes you hope your initiatives will help create in the participants, the community, or any other systems
  • Tangible supports for program implementation like program budgets, in-kind resources, staff time, and so on
  • People you intend to reach directly with your initiative
  • Each aspect of your intervention that will be implemented
  • Specific steps you will take as part of program implementation
  • Any contextual issues that might impact your efforts, either positively or negatively

Several models and processes exist to assist in describing a program; the most well-known model in the nonprofit world is probably the logic model.

Many people cringe when they hear the term “logic model,” because this tool is much used and often misunderstood and misapplied. Logic models are most appropriate for initiatives that are relatively clear-cut; that is, the initiative should have a clear, research- or theory-based line of reasoning that shows why you have reason to believe that your intervention will lead to your desired outcomes.

Want to see how we learned to love the logic model? Take a look at this brief video for some inspiration:

 

Look back to the list of components of a program description above. You’ll notice they coincide with logic models’ components:

Problem statement

A description of the issue or problem you are addressing

Vision or Impact

An overall vision for what could be different if you address the issue

Outcomes

(Usually broken out into short-term, mid-term, and long-term)

The specific changes you hope your initiatives will help create in the participants, the community, or any other systems

Resources/Input

Tangible supports for program implementation like program budgets, in-kind resources, staff time, and so on

Outputs

Each aspect of your intervention that will be implemented (e.g., how many media ads you intend to run or how many sessions you will meet with participants)

People you intend to reach directly with your initiative

Activities

Specific steps you will take as part of program implementation (e.g., implementing a curriculum, using a social norms campaign)

External Factors

Any contextual issues that might impact your efforts, either positively or negatively

A logic model is a picture of the intersection between your ideal and realistic goals, which is to say that it represents something achievable if you implement what you intend to implement. Generally, the reality of program implementation means that one or more aspects of your plan will shift along the way.

If you are working on an innovative initiative or an initiative without pre-determined outcomes, this model will be of far less utility to you than to someone who is working on an initiative with clear activities and easily-determined, theory-driven or theory-informed outcomes.

More than that, however, linear logic models are neither exciting nor inspiring, and the process of building them can seem dull, too. This format often feels minimally useful to program staff, if useful at all.

But there is no reason your program description has to end up in a linear logic model format.

Inspiration

Check out this animated program logic video (Tayawa, 2015). Can you identify the components named above?

So, if you need to describe your program – or even if you are required to create a logic model – consider taking an alternate route. Get a group of stakeholders together and use the Program Model Development resource to guide your process. Be creative – map your thinking until you have both an image and words that represent your vision.

If your work is centered on the needs of youth, you might be interested in this NSVRC podcast discussing Vermont’s Askable Adult prevention campaign to learn how research informed their process, the components of the campaign, and how they evaluated their program.

 

Logic Model Resources

If you’re not familiar with logic models, check out these handy resources that offer guidance on developing and using logic models. These resources all primarily focus on a traditional, linear logic modeling process, but they offer good insight into that process. You can transfer what you learn into a more creative process if that appeals to you.

Developing a Logic Model or Theory of Change (Online Resource) This section of the Community Toolbox provides a useful overview of logic models.

Logic Model Worksheet (PDF, 1 page) This worksheet from VetoViolence, an initiative of the Centers for Disease Control and Prevention, provides a simple fillable form to save and print your logic model.

Logic Model Workbook (PDF, 25 pages) This workbook from Innovation Network, Inc. walks you through each component of a logic model and includes templates and worksheets to help you develop your own model.

Logic Model Development Guide (PDF, 71 pages) This guide from the W.K. Kellogg Foundation provides extensive guidance on logic model development and using logic models for evaluation. It includes exercises to help you in your development.

Using Logic Models for Planning Primary Prevention Programs (Online Presentation, 26:52 minutes) This presentation describes the value of logic models in planning a violence against women primary prevention effort. It starts by looking at how logic models build on existing strengths, and when to use a logic model. The presentation then reviews logic model basics, explaining how logic models are a simple series of questions and exploring the steps in creating logic models.

References

Davies, R. (2015). Evaluability assessment. Retrieved from BetterEvaluation: http://betterevaluation.org/themes/evaluability_assessment


Outcomes are often a critical part of program development and evaluation. There are evaluation models that don’t require pre-determined outcomes (goal-free evaluation, for example), and innovative program development models often do not involve pre-establishing specific outcomes but rather look for emergent outcomes. However, most of us will be involved in developing and implementing programs and evaluations that require some level of specificity around outcomes or what we hope to achieve with our efforts.

We are all working to end sexual violence, but what will it take to get there? What are the short-term changes that will serve as signposts that we are on our way to that bigger vision? Those questions point to the outcomes we need to work on. Notice that these questions don’t ask what we need to do to get there but rather what we need to change.

Developing good and meaningful outcomes takes some practice.

On the simplest level, the outcome answers the question: what do we hope will be different in the world as a (partial) result of our efforts?

These changes might be in various domains:

  • Community and social norms
  • School or community climate
  • Individual attitude, beliefs, or behaviors
  • Relationship dynamics
  • Organizational operations and practices

In order to be measurable, your outcome should include a clear direction of change. Usually that’s indicated by either the word increase or the word decrease, but you might also have outcomes that seek to improve or maintain a condition.

When it comes time to measure your progress toward your outcome, you’ll have to ask yourself a different question. How will you know if that outcome has been met? What will be different? This should give you more specific indicators of the change, and those indicators will drive outcome-related data collection.

When you are ready to start writing your own outcomes, check out the Outcomes and Indicators worksheet.

Evaluation Purpose

Why are you evaluating your program? You might evaluate your program for any combination of the following reasons:

  • To prove that it’s working
  • To monitor its implementation
  • To improve it
  • To determine if you’re reaching the right people
  • Because a funder said you have to

You need to be clear on your purpose early on, because the purpose of your evaluation will guide your evaluation questions, which will then guide the type of evaluation and methods you will employ. These different purposes also require different levels of rigor in your evaluative processes. For example, proving that your intervention is creating change is a very high level of evaluation that requires establishing a cause-and-effect relationship between your intervention and measurable change.

When designing questions and considering evaluation purposes, the following distinctions should be kept in mind:

Formative, Summative, or Developmental Focus

A formative evaluation focuses on improving and tweaking an intervention, generally during the initial stages of implementation (that is, while it’s in formation). This might be used for new programs in their initial implementation or for existing programs that are being adapted for new populations or environments (Patton, 2014).

Summative evaluation is about making value judgments about the program and its impacts. Generally, a summative evaluation happens after a program has been improved through formative evaluation (Patton, 2014).

Developmental evaluation focuses on supporting innovation for initiatives implemented in complex environments that must be responsive and dynamic as a result of that complexity (Patton, 2014).

Process vs. Outcome Evaluation

Process evaluation focuses on the aspects of implementing the program, such as how the initiative was implemented, who was reached, etc. (Substance Abuse and Mental Health Services Administration [SAMHSA], 2016). Outcome evaluation focuses on the results of the intervention, such as what changed for participants or the community as a result. The two are used in tandem with each other (SAMHSA, 2016).

Evaluation Questions

Once you’ve got a solid description of your program as you plan to implement it, you need to ask some evaluative questions about your program. These questions will guide the design of your evaluation and help you compare real-world implementation to your plan. The specific questions you ask will depend on a variety of factors, including the nature of your initiative, requirements from funders, resources available for evaluation, and the purpose of your evaluation.

People often start evaluative processes to answer the following two questions:

  • Did we implement the program as intended?
  • Did we achieve the goals/outcomes we set?

However, taking into account that evaluation also seeks to get at the meaning and value of the intervention and its results, evaluation can also seek to answer questions like these:

  • Are we reaching the people who are most in need of our intervention?
  • How do the program participants view the change they experience as a result of the program/initiative?
  • When are adaptations made to the implementation? What factors influence the adaptations? What are the impacts of those adaptations?
  • What is the relationship between facilitation skills of the person implementing the program and participant experience in the program/program outcomes?
  • What unintended outcomes emerge as a result of program adaptations/program implementation at all?

Consider this as an opportunity for your evaluation team to openly brainstorm evaluation questions related to your program or initiative. You will not answer all of the questions you come up with when you brainstorm, but a brainstorming session allows you to consider the vast possibilities that will then need to be narrowed down. Choose questions that you have the resources to answer and that will yield data you are willing and able to take action on. Use this discussion guide during your brainstorm. Also consider the importance of developing intermediate outcomes. If you are specifically looking to evaluate policy activities, this worksheet can help your team determine key indicators.

 


Tools for Implementation

 

Take a journey through the CDC's Sexual Violence Indicators Guide and Database to explore how to:

  • Identify potential indicators and explore direct links to publicly available data sources 
  • Assess the fit of potential indicators
  • Create a plan to collect, analyze, and use indicator data

Indicator Selector Tool (PDF, 4 pages) This tool from the CDC helps those working in violence prevention identify indicators that are appropriate for their specific evaluation.

Technical Assistance Guide and Resource Kit for Primary Prevention and Evaluation (PDF, 253 pages) Developed by Stephanie Townsend, PhD, for the Pennsylvania Coalition Against Rape, this manual is intended to support prevention educators in building upon what they are already doing to evaluate their programs.

Measures Database (PDF, 3 pages) This database maintained by the Wisconsin Coalition Against Sexual Assault (WCASA) includes resources where you can find free measures, scales, or surveys specific to sexual assault prevention work. Some measures/scales are general examples and others are "standardized measures". Many examples are provided; there are pros and cons to each measure and WCASA does not endorse any specific options. Please contact NSVRC at prevention@nsvrc.org for assistance in identifying appropriate measures.

Evidence-Based Measures of Bystander Action to Prevent Sexual Abuse and Intimate Partner Abuse: Resources for Practitioners (PDF, 31 pages) This document is a compendium of measures of bystander attitudes and behaviors developed by the Prevention Innovations Research Center. Some versions of the measures have been researched more thoroughly in terms of psychometric properties than others; see the citations provided for articles that describe the versions of the measures that have been published. See also Evidence-Based Measures of Bystander Action to Prevent Sexual Abuse and Intimate Partner Violence: Resources for Practitioners (Short Measures) (PDF, 22 pages), which provides administrators of prevention programs with shortened, practice-friendly versions of common outcome measures related to sexual abuse and intimate partner violence. These measures have been analyzed to develop a pool of scales that are concise, valid, and reliable.

If you are thinking about measuring bystander intervention, check out this short video where NSVRC staff talk with Rose Hennessy about some common challenges and ways to address those challenges.

References

Patton, M. Q. (2014). Evaluation flash cards: Embedding evaluative thinking in organizational culture. Retrieved from Indiana University, Indiana Prevention Resource Center: https://eca.state.gov/files/bureau/obf_flashcards_201402.pdf  

Substance Abuse and Mental Health Services Administration (2016). Process and outcomes evaluation. Retrieved from www.samhsa.gov/capt/applying-strategic-prevention-framework/step5-evaluation/process-outcomes-evaluation


Evaluation design answers an important set of questions about how the evaluation will roll out; specifically, it answers questions related to data collection:

  • What is the context of your evaluation?
  • When will you collect data?
  • From whom?
  • How?

What is the context?

Before planning for data collection, analysis and use, you should answer some additional questions about the context of your evaluation.

  • What external factors impact the way your evaluation will be carried out or the way data will be (or won’t be) used?
  • What resources – money, time, expertise, etc. – are available to support evaluation efforts?
  • What characteristics of your target audience (when you have one identified) might impact the evaluation? Consider literacy levels, disability status, and age. For example, if you are working in a school setting, do you need to get school administration to approve any evaluation tools you might use, or only written tools? Do they require parental consent? If so, is it passive or active?
  • What funder restraints or guidelines do you need to follow or meet?

When?

Evaluation cartoon

How often do you need to collect data in order to tell a meaningful and accurate story about your efforts? The cartoon points out a clear problem with collecting data too infrequently: it yields answers to your questions that are insufficient or not meaningful. On the other hand, you don’t want to collect so much data that it becomes too cumbersome to analyze and use it all.

The timing of data collection depends on many factors, including the evaluation question you’re answering, the type of intervention you’re evaluating, and the budget you have allocated for the evaluation.

Nature of the Intervention

Just by way of example, consider the chart below that depicts three different interventions that you might need to evaluate. These might benefit from or even demand different data collection protocols.

Intervention A is a time-bound program. At the time during which you are planning the evaluation, the program has not yet started but it will have a starting point and an ending point.

Intervention B is one where the intervention has already started before an evaluation plan was in place. It still has a definite end point.

Intervention C has a definite starting point but is intended to continue indefinitely. That is, the program has no pre-determined ending point.

Intervention A/B/C timeline graphic

See below for more information on the following three options for data collection:

Pre/Post

The standard practice for preventionists for some time has been to conduct evaluations using what is called a pre- and post-test design, which means they collect data before they implement their prevention programming and again after they implement it. They then gauge their success by comparing scores at the end of the programming to the scores at the beginning. While this might be the most used design for collecting data, it is not necessarily the ideal design for all interventions or program components, nor is it ideal for all evaluation questions.

First, it’s helpful to know that this design applies to a variety of data collection methods, not just the written surveys that are frequently referred to as pre-tests and post-tests.

The following case study highlights one way that the pre- and post-test model is often implemented in prevention work.

Consider the following:


The following case study is a combination of multiple real case studies.

Monica from the local rape crisis center plans to implement a nine-session prevention program in a local high school. She will be working with 25 freshmen over the nine weeks of the program. On the day of her first session, Monica walks into the classroom, introduces herself, and hands out 25 copies of a 10-question pre-test. She instructs the participants to complete them in silence and hand them back to her in 5 minutes. At the conclusion of 5 minutes, Monica collects the pre-tests and puts them in a folder in her bag. At that time she talks to the group about how they will spend the next nine weeks together, does a few icebreakers to get to know the students, and introduces them to the concept of gender socialization.

Fast-forwarding to session 9, Monica does a closing circle with the group and, during the last 5 minutes of the class, hands out 25 copies of the post-test. She again instructs the group to complete the test in silence and on their own. When she collects them, she thanks the students for their participation. The bell rings at that point, and the students move on to their next class.

Arriving back at her office, Monica pulls out the pre- and post-tests in order to enter the data into her spreadsheet. Several of the tests are difficult to enter because students skipped questions, circled multiple options on a question, or did not complete the test at all. She decides to keep only the ones that are completed correctly, enters them, calculates the differences from the pre-tests to the post-tests, enters the numbers in her monthly report, and puts the tests in her file cabinet.

So, what’s nice about this design?

For an intervention like Intervention A, the pre/post design establishes a time-bound baseline (a starting point for behaviors, attitudes, norms, etc.) before you start your work and a clear comparison point at the end of your work.

Since there are only two data collection points, it can be relatively easy and cost-effective to implement, depending on the tools and processes you use and the type of data you’re collecting. If you were to do a pre/post design that involved a focus group before and after a program, that would probably be more time- and resource-intensive than using a written pre-test and post-test.

You can add on to the typical pre/post design with a follow-up data collection point at a date well after the completion of the program to see if the impact of your efforts varies with the passage of time. This design is preferable to a standard pre- and post-design for initiatives like sexual violence prevention work that hope to create stable changes in attitudes, behaviors, skills, and norms.
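If you do collect pre-, post-, and follow-up data, summarizing them does not require specialized software. The following is a minimal sketch, assuming a hypothetical spreadsheet exported to a CSV file with one row per participant and made-up column names (pretest_score, posttest_score, followup_score):

```python
# Minimal sketch: summarizing scores from a pre/post design with a follow-up.
# Assumes a hypothetical CSV with one row per participant and columns named
# participant_id, pretest_score, posttest_score, and followup_score.
import pandas as pd

scores = pd.read_csv("program_scores.csv")

# Average score at each data collection point
print(scores[["pretest_score", "posttest_score", "followup_score"]].mean())

# Change from pre to post, and whether gains held at follow-up
scores["pre_to_post_change"] = scores["posttest_score"] - scores["pretest_score"]
scores["post_to_followup_change"] = scores["followup_score"] - scores["posttest_score"]
print(scores[["pre_to_post_change", "post_to_followup_change"]].describe())
```

Looking at the post-to-follow-up change alongside the pre-to-post change is what tells you whether any gains held over time.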

What are the pitfalls or drawbacks of the pre/post design?

For many of us, this process represents precisely how we were trained to “evaluate” our work. Moving through the story from the beginning, several methodological and ethical issues present themselves if we look closely:

Prevention work is about relationships, and it’s difficult to build a relationship with people when the first thing we do is walk into the room and hand them a test. This is especially true when we are working with groups who are used to being tested or studied or having data collected about them, as is sometimes the case for marginalized populations in our society. Walking in and immediately offering them a test does not set the tone that there is any reason to expect a different kind of relationship, certainly not one that is more equitable. A barrier has already been established.

Young people in many school systems are rigorously and frequently tested. When we go into their spaces, we often also show up with a “test” in hand. Prevention workers report that these young people often don’t take pre- and post-tests seriously – circling many responses or no responses – resulting in unusable data. They have no reason to buy in to the evaluation process if they don’t yet understand the meaning or utility of the data.

Even if the participants thoughtfully complete both the pre- and post-tests, the data must be used in order for that effort to matter and for the data collection to be ethical. When we collect data related to sexual violence prevention efforts, we often ask about sensitive issues such as people’s relationships, sense of self, and experiences with oppression and violence. Answering these questions can be painful and emotional and certainly requires people to give of themselves in the process. As a result, we owe it to them to use the information they have given us in the most meaningful way possible, and, if we discover the data are difficult to use in a meaningful way, we need to adjust accordingly rather than continuing to collect the same kinds of data in the same ways.

Collecting data only before and after an intervention allows no possibility for mid-course corrections. If you’re not on target to reach your intended outcomes, you won’t know that until the program is completed. Your ability to make claims about your contribution to the outcomes represented on the post-test is limited since you don’t have any data in the middle to show trends of change or to highlight the possible influence of other factors on the outcomes of interest.

For some workarounds for this and other data collection issues, see the handout on Data Collection Hacks. To see an example of how the pre- and post-test model often shows up in our prevention work, revisit the case study above.

Retrospective Pre/Post

A retrospective pre/post design involves collecting data only at the end of the program, but you collect data related to both the state of things before the program and after the program at that time.

What’s nice about this design?

This design is useful in a variety of situations. For example, in intervention B where the intervention has already started before an evaluation has been designed, there’s no chance to get a true baseline at the beginning of the programming. Such a situation is not ideal, but, in certain instances, this will be preferable to not collecting any data at all.

When collecting data about changes experienced by individuals, a retrospective design is especially useful for gauging a person’s sense of how they’ve shifted (Hill & Betz, 2005) over the course of the intervention and for measuring actual change on domains that would have been difficult for people to accurately gauge at the beginning of a program (Nimon, Zigarmi, & Allen, 2011). Consider the following example:

Let’s pretend I am participating in your education-based prevention program, and you hand me a survey on the first day of the program.

One of the items on it reads:

I have sexist attitudes. OR I engage in sexist behaviors.*

I’m given options to indicate how often I engage in either of those. (This is called a frequency scale.) Assuming I have a general idea what the term “sexist” means, I probably also have a sense that engaging in sexist behaviors or having sexist attitudes is a bad thing. I know that a good person would never or minimally have such attitudes or behaviors, and that means I know the “right” answer to this. I circle the response that either says “never” or “very rarely” because I want to give you the answer you want. (When I select an answer based on what I think you want to hear, that’s called social desirability bias.) Additionally, I might genuinely believe that I engage in very few or no sexist behaviors or harbor very few or no sexist attitudes. In that case, I would also endorse a low frequency for this behavior.

After this survey, we spend six to nine weeks together during which time you teach me about gender-role socialization and gender inequality, including the many attitudes and behaviors that constitute sexism. During this time, I have many “aha” moments where I realize that attitudes and behaviors I’d considered harmless before are actually harmful toward women and driven by sexism. When you give me the post-test, I might be in a state of examining how I have behaviors and attitudes I need to change but that I have not yet changed. At that point, I am likely to endorse a higher frequency than when I completed the pre-test, and this makes it look like your efforts negatively impacted me. (When a participant learns more about a certain concept during the intervention and that impacts the way they respond on the post-test as compared to the pre-test, that’s called response-shift bias.) On the other hand, I might have already started to shift my behaviors and attitudes, in which case I might endorse approximately the same frequency that I did on the pretest. This will make it look like your program didn’t have any impact on me. (When responses on a pre-test are so high as to make it difficult or impossible to see change at the post-test, this is called a ceiling effect.)

However, if you asked me to reflect on my growth over the program and complete an item like the one above about how I was when I came into the program versus how I am at the end of the program, I can indicate the change that occurred for me because I have all the information I need to answer honestly. That is, I now have a better idea of what sexism is and how it manifests in attitudes and behaviors. In the beginning, I didn’t know what I didn’t know.

* These questionnaire items are here only for the sake of example, so that various types of bias can be illustrated. Questions such as these have limited utility in the real world because they contain terms that people might not know and are overly general. (However, they might be useful for gauging exactly what they ask about – a person’s sense of their own sexist attitudes or behaviors.) For information on writing good survey questions, check out this short and simple guide by the Harvard University Program on Survey Research (2017).

In addition to working well for issues like the one described above, the retrospective design can also be good for measuring someone’s likelihood of engaging in certain pro-social behaviors. Consider the opposite of the example above: you might ask about the likelihood that someone will interrupt bullying or sexism. Most of us want to believe that we will do these things. If you have a prevention program that’s focused on increasing skills and motivation to intervene in these situations, it might be best to ask about this retrospectively so that participants can indicate more accurately whether or not that has changed for them. That is, if you can’t measure behavior directly and need to measure behavioral intent, consider this design over the standard pre/post design.

Retrospective measures also make it easier for you to track data from individual participants. With measures administered at the beginning and end of the program, you can only compare individuals if you assign them a unique identifying number or have them assign one to themselves. Participants often forget their numbers over the course of an intervention, and you can’t keep track of the numbers for them without compromising the confidentiality of the data. This leads many people to lump all of the data from all participants together and compare aggregated scores at pre and post, but your argument about having made a change can be strengthened by being able to disaggregate the data and look at how individuals changed.

Practically, retrospective measures allow you to make adjustments to your measurement instrument to address the intervention as it was implemented rather than as you expected to implement it (Pratt, McGuigan, & Katzev, 2000). This can be helpful if, for some reason, you do not implement all sessions of a curriculum. Finally, some preventionists report that retrospective instruments are preferable to using pre- and post-tests because they do not get in the way of building rapport when an initiative begins.
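Because both the “before” and “now” ratings come from the same end-of-program form, no identifying numbers are needed to look at individual change. The following is a minimal sketch, assuming a hypothetical CSV file with made-up columns named retro_before and retro_now:

```python
# Minimal sketch: summarizing retrospective pre/post ("then" vs. "now") ratings
# collected on a single end-of-program questionnaire. The file name and the
# columns retro_before and retro_now are hypothetical.
import pandas as pd

df = pd.read_csv("retrospective_survey.csv")

# Within-person change: how participants rate themselves now minus how they
# rate themselves, looking back, before the program
df["perceived_change"] = df["retro_now"] - df["retro_before"]

print("Average perceived change:", df["perceived_change"].mean())
print("Participants reporting improvement:", int((df["perceived_change"] > 0).sum()))
print("Participants reporting no change or decline:", int((df["perceived_change"] <= 0).sum()))
```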

What are the pitfalls or drawbacks of the retrospective design?

This design shares drawbacks with the pre- and post-test design. It is not a good tool for directly assessing changes in knowledge. Social desirability bias continues to be an issue, even in the example above, and some studies even suggest that social desirability bias might be more significant using a retrospective design (Hill & Betz, 2005).

Integrated & Ongoing Data Collection

In addition to collecting data before and after implementation, you can also collect data (of various sorts) while the intervention is underway. Sometimes you might use the same instrument or data collection method to do this (e.g., questionnaires), but you can also use this as a way to collect different kinds of data. Collecting various kinds of data is one form of triangulation (Patton, 2014) and is good practice for getting a deeper understanding of an issue and also for making sure you have valid data. Also, different kinds of data serve different purposes.

What are the benefits of collecting data during an intervention?

If you collect data during an intervention, you have a chance to see if you’re on track toward meeting your intended outcomes or if you need to make midcourse adjustments to create the kind of change you seek to create. This type of data can also be collected to provide quick feedback to people who are implementing the program so that they can continuously improve their efforts throughout implementation. For example, some preventionists have teachers or other observers complete observations about the way the preventionist facilitates activities and conversations and provide them with improvement-oriented feedback based on the observations.

Certain data collection methods can be used unobtrusively during the intervention to collect data without interrupting the flow of the intervention. Other data collection methods can become part of the intervention and serve the purpose of furthering your work toward your outcomes while also giving you information about how far you’ve come at a certain point. Some forms of observational data collection and activity-based evaluation are good examples of this type. Skye Kantola discusses the ways that activity-based assessment can be used as a way to assess community needs and measure community-building methods in this NSVRC podcast.

Collecting data during the intervention can offer seamless opportunities for participatory data collection. One way participants can be involved in data collection is by observing peers within their classrooms or schools or observing the larger community around them, depending on the type of intervention and evaluation questions.

Watch this NSVRC Mapping Evaluation video podcast with Maya Pilgrim to see how a facilitator checklist can help preventionists identify the things that work during a prevention program.

What are the drawbacks of collecting data during an intervention?

Ongoing data collection can be a person-intensive endeavor requiring the preventionist, participants, and others to frequently or consistently engage in data collection. The data then needs to be analyzed and interpreted. Additionally, the data can be time-sensitive. That is, for the data to be maximally useful, it needs to be analyzed and processed in time for it to facilitate a course correction if one is needed.

From Whom?

From whom do you need to collect data in order to answer your evaluation questions with a relative degree of certainty?

You do not need to collect data from each and every person who participates in your programming. Right now, you might be thinking, “What? I’ve always tried to get every single person to fill out a survey or take a pre- and post-test!” If you’re thinking this, you might also be one of those people with file cabinets full of data that you’ve never quite figured out how to use – or maybe you’ve never quite figured out how to find the time to enter all of the data into a spreadsheet so that you can run analyses on it.

Sampling is the process of strategically selecting a subgroup of participants (or another population of interest) from whom to collect data in response to your evaluation questions (Taylor-Powell, 1998). The general practice of sampling involves trying to collect data from a subgroup that somehow represents the overall group. For example, if you’re working with four 11th-grade health classes in the same high school, and they represent a similar distribution of the school population, you might decide to collect data from only two of the classes. If, however, one of those classes consisted entirely of athletes, it would not be representative of the four classes as a whole.

Other sampling practices might involve deliberately sampling particular groups of people. For example, an evaluator might oversample a particular subpopulation if they are not well-represented in the overall group, but data about the population stands to tell a particularly important story. This practice can help amplify the voices and experiences of marginalized groups.
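For simple situations like the classroom example above, random selection can be done with a spreadsheet or a few lines of code. The sketch below is illustrative only; the class names, roster, and sample sizes are made up:

```python
# Minimal sketch: two simple sampling approaches using only Python's standard
# library. The class names, roster, and sample sizes are made up.
import random

random.seed(42)  # fix the seed so the selection can be documented and repeated

# Cluster-style selection: pick 2 of the 4 comparable health classes
classes = ["Period 1", "Period 3", "Period 5", "Period 7"]
print("Collect data from:", random.sample(classes, k=2))

# Simple random sample: pick 30 participants from a larger roster
roster = [f"participant_{n}" for n in range(1, 201)]
sample = random.sample(roster, k=30)
print("Sample size:", len(sample))
```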

Like many other aspects of evaluation, a variety of factors influence decisions about sampling (Taylor-Powell, 1998), including

  • Resources – From how many people can you reasonably collect data? How much data do you have the time and person-power to input, clean, analyze, interpret, and use?
  • Questions – What do your evaluation questions indicate about what you need to know and about or from whom you need to know it? What kind of claims are you seeking to make about your initiatives? For example, if you’re trying to make strong claims about the contribution of your intervention to particular outcomes, you might need to use more rigorous sampling procedures and seek a larger sample than if you’re evaluating more for program improvement.
  • Philosophy/Approach – Different philosophies about and approaches to evaluation suggest different approaches to sampling.

Tip

When making decisions about sampling, keep in mind that data collection stands to impact the participants and the intervention in ways that could either contribute to or detract from the intended outcomes. For example, if you conduct focus groups at any point in your program, these focus groups might count as an additional dose of learning that can assist with integration of concepts from the intervention.

Resources

Decision Tree for Selecting Sampling Methods (PDF, 1 page) The Ohio Primary Prevention of Intimate Partner Violence & Sexual Violence Empowerment Evaluation Toolkit includes a simple decision tree to help with selecting sampling methods.

How?

When you consider how you will collect data to answer your evaluation questions, you might first think about questionnaires or other written instruments for data collection. Written instruments are just one of the many options for data collection, each of which has benefits, drawbacks, and appropriate applications. Method selection needs to be driven by your evaluation questions, resource availability, and expertise among your evaluation team.

What kind of data do you need?

Before making decisions about the tools you will use to collect data, it will be helpful to understand the difference between qualitative and quantitative data.

Qualitative data are descriptive and include words, images, observations, existing documents, and other non-numerical data.

Example: If you hold a focus group, the data you are collecting consist of the responses participants give to questions you ask. These data are in the form of words. If you conduct a PhotoVoice project for data collection, the photos taken by participants, along with the words they use to describe those photos, are the data you collect.

Quantitative data consist of numbers themselves or of numbers that represent discrete concepts.

Example: If you count how many people attend various events, the numbers of those counts are quantitative data. When you use questionnaires that present questions or statements with scales (for example, a scale that assesses the level to which people agree with a certain statement), the items on the scale usually correspond to numbers and are quantitative data.

Generally, evaluators see quantitative data as somewhat less time-consuming to collect and as offering more opportunity to generalize findings, since data can be collected from far more people. Qualitative data, by contrast, offer a rich picture of and context for how people and communities change. If qualitative data are collected from only a small or select group of people, the ability to make statements about how much that information applies to other people is limited. Keep in mind that these are general views and not the only views.

“The evaluation ideal is: No numbers without stories; no stories without numbers.” (Patton, 2014, p. 13)

As the quote above suggests, a mixed-method (both quantitative and qualitative) approach is seen by some as the best way to get a good picture of what is happening with a program or initiative. For example, you might use a questionnaire to collect data about participants’ intent to engage in bystander behaviors. For additional context, you might then hold a focus group to find out more about why people did or did not increase their intent to intervene over the course of the program.

Several sources of data to answer evaluative questions exist. The trick is to figure out the best way to collect data that will answer your specific questions. The data you gather can be collected in ways that are more or less intrusive. That is, you might collect data in ways that are very obvious to program participants (e.g., having them complete surveys) or that are less obvious to program participants (e.g., reviewing existing records). Data collection methods that are less intrusive have less impact on the program participants and context.

What kind of data will be credible for you?

If you think about the data you collect as helping you tell a story about your work, you need to consider which kinds of data will help you tell a story that will be regarded as credible by the various people to whom you will tell the story. What kind of data will help tell a credible and meaningful story to the participants who are part of your program? Will that data be the same as or different from what is valued by your funders? How will you reconcile any differences between the two?

Planning for data collection can be both an exciting and daunting project. As curious human beings, we often find ourselves with more questions than we can reasonably answer in the course of an evaluation. Also, when we start to think about data collection, we easily fall into the trap of following our curiosity rather than following our questions. Remember: data collection serves a particular evaluation need and needs to have a useful purpose.

What kind of data will answer your evaluation questions?

Keep in mind that you might need to collect multiple types of data to get a sufficient answer to your evaluation question, even if your evaluation question focuses primarily on whether or not you met your stated outcomes. Considering your evaluation questions, ask yourself, “How will I know?” For example, if your question focuses on “Did we meet our outcomes?” you need to ask, “How will I know if our outcomes are met?” Think about what will be different in what you or others see, hear, etc.

The same goes for questions that are not tied to the achievement of your outcomes. So, if your question is, “How meaningful were the outcomes to program participants?” then you need to ask, “How will I know if they were meaningful? How will I know if they were only a little meaningful versus significantly meaningful?”

Generally, we consider the answers to these questions to be indicators. Indicators are more specific than outcomes and more directly measurable. In order to fully answer evaluation questions, you might need to collect data about multiple indicators, which might show up through various kinds of data. When you start brainstorming indicators, you might discover many indicators will help answer your question. Choose ones that are meaningful, easy to collect, and closest to your questions.

You can use the Mapping Data Sources to Evaluation Questions handout to help guide and keep track of your thinking.


Example 1

If you are implementing a bystander intervention program, observational methods can help you see if participants are using bystander intervention skills, how well they are using them, if they are using them in the appropriate moments, and whether or not those skills are proving effective. (For specific ideas about how to do this, check out Appendix C of Activity-Based Assessment: Integrating Evaluation into Prevention Curricula [Curtis & Kukké, 2014].) To gather information about social norms change, you might additionally look at how members of the community respond to the bystander when they intervene. For example, do people ignore the intervention or do others join in to help? The latter would be suggestive of a norm in favor of bystander intervention. 

Example 2

If you are trying to build healthy relationship skills among a group of people, over time you can track their use of those skills during the program sessions.

Observational Data

Overview

Observational data come from directly observing behaviors of program participants or other members of a target audience or community. Collecting observational data is appropriate for understanding how people behave in their environments and how or if they enact new skills that they’ve learned.

Process

Observational data collection can be time-consuming if observations are happening independent of an intervention, like example 1 above. However, observational methods can also be integrated into an intervention so that data collection happens at the same time, as seen in the second example. Ideally, more than one person will collect the data. Some preventionists train classroom teachers to assist with observational data collection either during the sessions the preventionist facilitates, between sessions, or after the completion of the program.

Data are usually collected using pre-determined criteria so that all data collectors know what to look for and can track similar behaviors. Data collection sheets (also called rubrics) might be organized to collect

  • whether or not a behavior was observed,
  • how often a behavior was observed,
  • how many people engaged in a behavior, or
  • how well people implemented the behavior (Curtis & Kukké, 2014).

Although designing collection sheets and training observers can take a lot of preparation at the beginning, the data this method provides can be uniquely useful for initiatives that aim to change people’s behaviors.
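To give a concrete, purely illustrative sense of what tallying rubric data can look like, the sketch below uses made-up behavior labels and a hypothetical 1–3 quality rating:

```python
# Illustrative sketch: tallying observational rubric data. Each record is one
# observed behavior coded against pre-determined criteria; the behavior labels
# and the 1-3 quality rating are hypothetical.
from collections import Counter

observations = [
    {"behavior": "interrupted a harmful comment", "quality": 3},
    {"behavior": "interrupted a harmful comment", "quality": 2},
    {"behavior": "modeled respectful language", "quality": 3},
]

# How often each behavior was observed
for behavior, count in Counter(obs["behavior"] for obs in observations).items():
    print(f"{behavior}: observed {count} time(s)")

# How well, on average, the behaviors were implemented (1 = weak, 3 = strong)
average_quality = sum(obs["quality"] for obs in observations) / len(observations)
print("Average quality rating:", round(average_quality, 2))
```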

Participatory Options

Observational data collection offers a great opportunity for participatory program evaluations. Ideally, participants will help determine what types of observations to conduct, when, and of whom. Then, they can be trained to participate in the collection of the data.

Focus Groups/Interviews

Overview

Focus groups and interviews provide opportunities for you to directly interact with individuals to get their thoughts, feelings, and reactions to a series of questions related to your evaluation question. These are also often used in needs assessments because they can provide detailed perspective and context about the strengths and needs of a given community. Although time consuming, the richness of the data you receive from focus groups or interviews can be critical to helping you more fully understand data you have collected through other means.

Process

To conduct focus groups or interviews, you need to design questions and decide who will either be interviewed or invited to participate in the focus group. Both of these methods require careful note taking or audiotaping and transcription, so you will need to build in time and resources for that. If you go the note-taking route, it is best to have one person facilitating and one person taking notes. If you want to audio-tape interviews or focus groups, make sure to get everyone’s permission to do so and let them know how the recordings will be used.

Participatory Options

Invite participants to help design questions or generate ideas about who should be interviewed or included in the focus group. You can also train participants to conduct the interviews or facilitate focus groups. Methods like Most Significant Change can be implemented in a participatory way by having participants interview each other. The developers of this method have put together a 10-step guide. You can also check out this video podcast with Dee Ross-Reed about how key informant interviews can help uncover program successes and ways evaluation can be modified to better meet the needs of the community. 

Existing Data/Documents

Overview

Some of the data you need to help answer your evaluation questions exists in materials put together by external sources. This might include newspapers (including school newspapers), documents from other nonprofits, data collected by state agencies, and so on. If other entities are collecting or providing data you need, there’s no reason to collect it yourself as long as you can get enough information to determine how valid or reliable the data might be and how applicable it is to your specific question. Additionally, documents produced by or about given communities (e.g., in newspapers) can give you a sense of the community values and norms and how they shift over time.

Process

The process for working with existing data or documents is relatively straightforward. Determine in advance what you are looking for in the existing data and how it will supplement any other data collection you do; this is another place where a rubric might be appropriate. You then need to collect whatever the specific data sources will be, whether that’s community newspapers, state survey data, school records, or another source.

Participatory Options

Participatory opportunities include the process of collecting the data artifacts and also the process of pulling and analyzing the data found in them. For example, you might involve students in collecting archival information to ascertain how gender norms have operated in their school over time; they could pull information from school newspapers and yearbooks.

Questionnaires

Overview

Questionnaires (which you might also think of as pre/post-tests or surveys) might be the most frequently used data collection method among preventionists. With questionnaires, you design questions that seek to assess particular constructs (e.g., adherence to rigid gender roles).

Process

After determining which constructs you want to measure with a questionnaire, you need to write questions that address each construct or use questions from existing measures. Ideally, questions will be piloted prior to use so that you can make sure the language in the questions makes sense to the target population and determine the extent to which the questions measure what you think they are measuring. You will also need to decide how you will administer the questionnaire. Options include paper-and-pencil administration during a session, online administration, or reading items aloud to participants.
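Once responses are collected, items measuring the same construct are typically combined into a single scale score. The following is a minimal sketch in which the file name, item names, the 1–5 response scale, and the decision to reverse-score item_3 are all assumptions for illustration:

```python
# Minimal sketch: turning Likert-type items (1 = strongly disagree ... 5 =
# strongly agree) into a single construct score. The file name, item names,
# and the choice to reverse-score item_3 are hypothetical.
import pandas as pd

responses = pd.read_csv("questionnaire_responses.csv")

# Reverse-score a negatively worded item so that higher always means more of
# the construct being measured
responses["item_3_scored"] = 6 - responses["item_3"]

# Average the items that make up the construct into one scale score
construct_items = ["item_1", "item_2", "item_3_scored", "item_4"]
responses["construct_score"] = responses[construct_items].mean(axis=1)

print(responses["construct_score"].describe())
```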

Participatory Options

Participants can be involved in helping develop questions, piloting measures, and even administering the questionnaire.

Creative Materials

Overview

Over the course of an intervention, participants or community members might produce a variety of artistic materials that can be used as data. For example, community members might construct a mural about what healthy communities look like or youth in a school-based program might be asked to take photos that represent healthy, respectful interactions both before and after an intervention.

Process

If you want to deliberately build in creative opportunities that are explicitly evaluative, you will need to determine prompts for the creative expression that will inspire participants to produce materials with content relevant to your evaluation questions and indicators. For example, there’s a significant difference between asking people to take photos that represent what the program or initiative meant to them and asking them to take photos of what they think the impact of the program or initiative was on the people around them.

Participatory Options

Many creative methods are inherently more participatory than non-creative ones. For example, Photovoice involves participants in taking photos in response to a prompt, and then they are able to share both their photos and their thoughts about and reasons for taking them. The Ohio Alliance to End Sexual Violence partnered with PhotovoiceWorldwide to create a toolkit for sexual violence prevention on the benefits of using photovoice and how to do so with community safety in mind.

DATA SOURCE

PREVENTION EXAMPLES

OBSERVATIONAL DATA

Observational data come from directly observing behaviors of program participants or other members of a target audience or community.

  • Interventions in problematic behaviors and comments
  • Proactive modeling of healthy, positive, and safe behaviors

FOCUS GROUPS/INTERVIEWS

Focus groups and interviews are opportunities to get detailed descriptive data including people’s perceptions about their experiences in a program and reflections on how they have changed.

  • A focus group of community members can give you insight into the reach and impact of your community norms change campaign
  • Interviewing teachers might give you insight into how students’ attitudes and behaviors have shifted and also about norms change within a school.

EXISTING DATA/DOCUMENTS

Existing documents include materials or records that are collected external to your evaluation efforts but which you might be able to access to assist in answering your evaluation questions.

  • School records of sexual harassment or bullying
  • BRFSS/YRBS
  • Reports or data from other nonprofit evaluations

QUESTIONNAIRES

Questionnaires include questions and other items to gauge people’s attitudes, behavioral intent, knowledge, and other constructs.

  • Attitudes Toward Women Scale
  • Bystander Efficacy Scale

CREATIVE MATERIALS

Artistic and creative products like drawings, photos, murals, journal entries, poetry, etc. are also sources of data for evaluation.

  • Student PhotoVoice projects in response to a prompt about healthy relationships. Photos are taken at the beginning and ending of the intervention to compare.

If you want to brainstorm options for your own data collection based on these categories, download the Identifying Data Options worksheet.

References

Curtis, M. J., & Kukké, S. (2014). Activity-based assessments: Integrating evaluation into prevention curricula. Retrieved from the Texas Association Against Sexual Assault: http://www.taasa.org/wp-content/uploads/2014/09/Activity-Based-Assessment-Toolkit-Final.pdf

Harvard University Program on Survey Research. (2017). Tip sheet on question wording. Retrieved from http://psr.iq.harvard.edu/files/psr/files/PSRQuestionnaireTipSheet_0.pdf  

Hill, L. G., & Betz, D. L. (2005). Revisiting the retrospective pretest. American Journal of Evaluation, 26, 501-517. doi:10.1177/1098214005281356

Nimon, K., Zigarmi, D., & Allen, J. (2011). Measures of program effectiveness based on retrospective pretest data: Are all created equal? American Journal of Evaluation, 32, 8-28. doi:10.1177/1098214010378354

Patton, M. Q. (2014). Evaluation flash cards: Embedding evaluative thinking in organizational culture. Retrieved from Indiana University, Indiana Prevention Resource Center: https://www.indianaevaluation.org/resources/Documents/Resources/MQP_OBT_flashcards_2017.pdf

Pratt, C. C., McGuigan, W. M., & Katzev, A. R. (2000). Measuring program outcomes: Using retrospective pretest methodology. American Journal of Evaluation, 21, 341-349. doi:10.1016/S1098-2140(00)00089-8


Once you’ve collected data, you need to turn the raw data into a form that is more useful for driving decision making. That means you need to analyze the data in some way. Qualitative and quantitative data are analyzed in different ways.

Qualitative Data

There are a variety of ways to analyze qualitative data, and the analysis chosen will depend on the type of data you have and the questions you seek to answer with the data. Although it might sound daunting, certain methods of analyzing qualitative data are relatively easy to learn and implement (for example, the method outlined by the National Sexual Assault Coalition Resource Sharing Project [RSP] and National Sexual Violence Resource Center [NSVRC] in 2014). Other methods require considerable time and training and would likely require the service of an outside evaluator.

One of the benefits of qualitative data is that they can essentially be quantified – that is, you can turn the descriptions into numbers. For example, you might count the number of times people in a focus group mention that their behaviors changed as a result of your intervention. When you use rubrics or scoring tools for observational data collection, you are immediately quantifying your observations rather than recording them as descriptive events. This process can streamline data analysis by reducing the amount of time required and also making the process easier for those who will implement it (Curtis & Kukké, 2014).

For example, one preventionist in Texas shared that her evaluation includes collecting qualitative data that the team scores using a rubric with predetermined themes and buzzwords. This allows them to track buzzwords and make quick determinations about the data based on the domains of interest represented in the rubric. Since they are also the ones doing the scoring, they can see the rich data and use the individual comments from participants as context for the decisions they make based on the data.

When you quantify qualitative data, for example by counting the number of times an idea or concept appeared in the data, you lose some of the richness of the original data, but the resulting numbers can also be useful for telling the story of your work. If you quantify the data, consider keeping examples of the richer content (for example, compelling quotes or images) to help keep the numbers in context and support the point you are trying to make with your data.
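As a purely illustrative sketch of the simplest form of quantifying qualitative data, the example below counts how often pre-determined themes appear in a transcript; the theme words and file name are made up:

```python
# Illustrative sketch: counting how often pre-determined themes or "buzzwords"
# appear in focus group notes or a transcript. The theme list and file name
# are hypothetical; counts like these are rough and still need to be read in
# context by a human.
themes = ["consent", "bystander", "respect", "boundaries"]

with open("focus_group_transcript.txt", encoding="utf-8") as f:
    text = f.read().lower()

for theme in themes:
    print(f"'{theme}' mentioned {text.count(theme)} time(s)")
```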

Quantitative Data

As with qualitative data, there are a variety of ways to analyze quantitative data, and different methods are used for different kinds of quantitative data and for different kinds of insight into the data.

For many preventionists, the most used types of analyses fall under the category of descriptive statistics. These analyses, as the name implies, describe the data (a brief sketch of how to calculate them follows the list). Descriptive statistics include

  • Frequencies – counts of how often something occurred
  • Percentages – the proportion of responses or occurrences, expressed out of 100
  • Means, medians, and modes – measures of central tendency (the average, the middle value, and the most common value, respectively)
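The sketch below calculates these statistics on a small, made-up set of post-test scores using only Python’s standard library; the numbers are for illustration only:

```python
# Minimal sketch: descriptive statistics on a made-up set of post-test scores,
# using only the Python standard library.
from collections import Counter
from statistics import mean, median, mode

post_scores = [3, 4, 4, 5, 2, 4, 5, 3, 4, 5]

frequencies = Counter(post_scores)  # how often each score occurred
percentages = {score: round(count / len(post_scores) * 100, 1)
               for score, count in frequencies.items()}

print("Frequencies:", dict(frequencies))
print("Percentages:", percentages)
print("Mean:", mean(post_scores))      # average
print("Median:", median(post_scores))  # middle value
print("Mode:", mode(post_scores))      # most common value
```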

Additional insight can be gleaned from inferential statistics, which can do things like compare two sets of data to see if there’s a significant difference between them. This is useful for comparing pre- and post-test data, for example. Inferential statistics are slightly more complicated to conduct than descriptive statistics, but with a little training, anyone can run them using Excel or other tools.
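As one example of the kind of comparison inferential statistics allow, the sketch below runs a paired t-test on made-up, matched pre- and post-test scores; it assumes the scipy library is installed:

```python
# Illustrative sketch: a paired t-test comparing matched pre- and post-test
# scores. The numbers are made up, and the scipy library is assumed to be
# installed (pip install scipy).
from scipy import stats

pre = [2, 3, 2, 4, 3, 2, 3, 3, 2, 4]
post = [3, 4, 3, 5, 4, 3, 4, 4, 3, 4]

t_statistic, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_statistic:.2f}, p = {p_value:.4f}")
# A small p-value (commonly below .05) suggests the pre-to-post difference is
# unlikely to be due to chance alone.
```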

Data Cleaning

Quality data are key, and having a data cleaning strategy is an important component of your data analysis process. Strategic Prevention Solutions has put together some tips to help you get started.
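
To make the idea concrete, here is a minimal, hypothetical sketch of what a cleaning pass might look like using the pandas library; it is not drawn from the Strategic Prevention Solutions tips, and the file and column names are placeholders.

```python
import pandas as pd

# Load raw survey responses; "responses.csv" and the column names are placeholders.
raw = pd.read_csv("responses.csv")
items = ["q1", "q2", "q3", "q4"]  # survey items scored on a 1-5 scale

# Drop exact duplicate submissions (e.g., the same response submitted twice).
clean = raw.drop_duplicates()

# Drop rows where every survey item was left blank.
clean = clean.dropna(subset=items, how="all")

# Flag rows with missing or out-of-range answers so they can be reviewed
# rather than silently included in the analysis.
needs_review = ~clean[items].isin([1, 2, 3, 4, 5]).all(axis=1)
print(f"{needs_review.sum()} rows have missing or out-of-range answers to review")

clean.to_csv("responses_clean.csv", index=False)
```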

Resource

Primary Prevention and Evaluation Resource Kit: Analyzing Evaluation Data (PDF, 112 pages) This resource from the Pennsylvania Coalition Against Rape offers a robust exploration of how these various data analysis options apply to primary prevention work and walks through the process for using some of them.

Data Analysis Online Learning Course (Online Course, requires free account to log in) The Data Analysis Series consists of four courses designed to show users how to enter, analyze, and report on evaluation data captured from pre/post surveys. These courses contain sample data for practice, and users can pause, review, and revisit any portion of the courses.

References

Curtis, M. J., & Kukké, S. (2014). Activity-based assessments: Integrating evaluation into prevention curricula. Retrieved from the Texas Association Against Sexual Assault: http://www.taasa.org/wp-content/uploads/2014/09/Activity-Based-Assessment-Toolkit-Final.pdf

National Sexual Assault Coalition Resource Sharing Project, & National Sexual Violence Resource Center. (2014). Listening to our communities: Guide on data analysis. Retrieved from http://www.nsvrc.org/sites/default/files/publications_nsvrc_guides_listening-to-our-communities_guide-for-data-analysis.pdf


Interpreting Data

When data are analyzed, they don’t automatically tell you a story or indicate how to act on them. In order to act on the data, you need to make meaning of the data through a process of interpretation. This is the point when you look at the analyzed data and say “so what?”

This step helps you determine potential explanations for why the data came out the way they did so that you know what actions to take as a result.

For example, what does it mean if there is an increase in bystander incidents from the beginning of your intervention to the end of your intervention? Or what might it mean if the incidents did not increase?

Involving program participants in interpretation of data can provide rich and critical insights. This involvement can be relatively informal. For example, in activity-based assessment, data are collected in various sessions of a curriculum-based intervention to gauge learning integration along the way. It’s encouraged to bring concerning (or “unsuccessful” data) in to the following session to facilitate dialogue about why the participants may not have integrated the learning in the way you expected (Curtis & Kukke, 2014). This gives indications about whether the struggle was related to the evaluation instrument itself, the curriculum content, the facilitation, or something happening with the participants.

More formal options for participatory data interpretation include facilitating meetings where preliminary analyses are shared with stakeholders, and they are invited to offer feedback, reflections, and additional questions (Pankaj, Welsh, & Ostenso, 2011).

For one example of how to implement this, see the guide to using Data Placemats and What? So what? Now what? sections of the Training and Capacity Building Activities guide.

booksResource

Participatory Analysis: Expanding Stakeholder Involvement in Evaluation (PDF, 9 pages) Published by the Innovation Network, this short guide offers advice, tools, and case studies related to involving stakeholders in data analysis.

Data Analysis: Analyze and Interpret (Online Course, free account required to log in) In part three of the NSVRC Data Analysis course, you will be able to identify types of data, analyze your data, and interpret your data with averages, changes over time, and differences between groups.

Using Data

Once you’ve analyzed and interpreted your data, it’s time to answer the question: Now what?

If you are engaged in participatory data interpretation, these questions might be answered in that process, and then your job is to make good on the changes.

The “now what?” phase can help you figure out what might need to be shifted about what you’re doing and how you’re doing it. When looking at your data, you will want to consider the following questions.

What do we need to adjust about the evaluation process or tools?

Sometimes you will discover that the evaluation process or tools you used did not give you sufficient information to make judgments about the intervention or its implementation, which means your primary point of action will be to make changes to the evaluation itself to yield better data in the future.

For example, you might discover, as some other preventionists have, that the young people you work with complete their surveys haphazardly, circle the spaces between answers and write snarky comments in the margins. Or you might hold a focus group and discover that none of the participants has much to say in response to the questions you asked them. Either of these situations could indicate a problem with the questions/items you’re using or the methods of survey administration and focus group facilitation themselves.

What do we need to adjust about the nuts and bolts of the intervention (i.e., the program components)?

Perhaps the data show you that particular aspects of your programming are less effective than other aspects. For example, one preventionist noted that the data she collected from program participants consistently showed that they seemed to be integrating messages about sexism more so than they were integrating lessons about racism. It was clear that something needed to be tweaked about the discussions related to racial justice to make them more relevant and compelling to the participants. 

What do we need to adjust about program implementation (e.g., the way it is facilitated, the skill-sets of the implementers)?

It also might be the case that the components of your program need very little tweaking while the implementation needs more tweaking. For example, maybe you are not reaching the right people or maybe the people doing your community organizing or program facilitation need additional skill-building to be more effective in their work.

Communicating About Your Evaluation

In addition to using data to make changes in the ways outlined above, you also need to communicate about your evaluation to your funders and community partners. This is part of accountability and also a way to celebrate your successes and help others learn from your work.

This communication might be relatively informal (a mid-evaluation update at a committee meeting) or might be more formal (a full evaluation report or presentation). Regardless of the occasion, the way you communicate about the program and its evaluation matters. Remember, this is your story to tell – make it compelling! Consider which angle of the story you want to tell and the purpose of telling your story. You might tell the story in slightly different ways to different audiences and to meet different purposes. For example, maybe your board of directors wants to see numbers, but your community partners would rather hear stories about your work.

Ways to Communicate about Your Data and Evaluation

Data Visualization

Data visualization (also called data viz, for short) is exactly what it sounds like: ways of presenting data visually. As a field of practice, data viz draws on scientific findings and best practices from the graphic design and communication fields to help create powerful, data-driven images. The ever-popular infographic is a data viz tool that allows you to highlight important points with meaningful images in a succinct, easy-to-read way. To learn the basics of data visualization, check out this Self-Study Guide.

Read NSVRC's blog series about why visualizing data is vital to sexual violence prevention.

Why is visualizing data vital to sexual violence prevention? Part 1 with Renu (Re) Gupta, Interpersonal Violence Prevention Programs Coordinator at the Colorado Department of Public Health and Environment 

Why is visualizing data vital to sexual violence prevention? Part 2 with Erin Chambers, Visual Communications Designer from the Missouri Coalition Against Domestic & Sexual Violence (MOCADSV) 

Charts & Graphs

Charts and graphs are visual representations of your data that make it easier for people to understand what the data communicate. They can be made in many computer programs you might already have on hand, including PowerPoint and Excel. These standard charts and graphs might need a bit of redesign to maximize their readability and impact. People have written entire books about this issue. (Seriously! Check out this one, for example, if you really want to nerd out about this.) If you don’t have time to read a whole book but want some good tips on how to work with charts, check out the video inspiration below.

Quantitative and qualitative data require different types of visualizations, and it is important to choose a type that is both appropriate for your data and that clearly communicates the implications of the data. The default chart your data processing software chooses might not be the best or most compelling option! Fortunately, there are guides for choosing the correct chart for both qualitative (Lyons & Evergreen, 2016) and quantitative (Gulbis, 2016) data that can help guide you through those decisions. We highly recommend you run what you create through this data visualization checklist.
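
If you want more control than the defaults in Excel or PowerPoint allow, a charting library lets you make readability choices directly (descriptive titles, direct labels, less chart junk). The sketch below is only an illustration using Python's matplotlib with hypothetical pre/post percentages; it is not a prescribed NSVRC approach.

```python
import matplotlib.pyplot as plt

# Hypothetical percentages of participants reporting bystander action.
labels = ["Before program", "After program"]
values = [42, 68]

fig, ax = plt.subplots(figsize=(5, 3))
bars = ax.bar(labels, values, color=["#bdbdbd", "#2a7f62"])

# A descriptive title states the takeaway instead of just naming the variable.
ax.set_title("Reported bystander action rose after the program")
ax.set_ylabel("Percent of participants")
ax.set_ylim(0, 100)

# Label the bars directly and drop borders that add clutter.
ax.bar_label(bars, fmt="%d%%")
for side in ("top", "right"):
    ax.spines[side].set_visible(False)

fig.tight_layout()
fig.savefig("bystander_change.png", dpi=200)
```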

Reports

Typically, people give a full rundown of their evaluation process and results in an evaluation report. These reports are shared with funders and other community partners. The reports are often long and contain more information than is useful to all interested parties, so some evaluation experts like Stephanie Evergreen recommend a 1-3-25 model that includes a 1-page handout of highlights, a 3-page summary, and a 25-page full report (Evergreen, 2015). Check out Stephanie’s Evaluation Report Layout Checklist; it will help you make sure your content and layout are maximized for easy reading and impact (Evergreen, 2013b).

If you are working with youth and need an interactive way to share data, take a look at Stephanie Evergreen's Data Fortune Teller tool and customizable form. 

Infographics

Infographics provide an opportunity for you to visually represent a variety of data points succinctly and powerfully. Generally, infographics consist of one page’s worth of data and information to communicate one or maybe two main points. Several online programs offer free or low-cost options for making infographics and include templates, images, charts, and options to upload or input data. Check out Piktochart and the infographic section of Animaker to see examples of what you can do. (Animaker lets you animate infographics to tell a more dynamic story!)

Further Inspiration

Stephanie Evergreen’s presentation 8 Steps to Being a Data Presentation Rock Star is a fun way to learn the basics about communicating about data (Evergreen, 2013a). While the presentation primarily focuses on creating slide-decks, the skills also correspond to creating data visualizations for reports and other types of communication media.

Additional Resources

Communicating and Disseminating Evaluation Results Worksheet (PDF, 2 pages)   

Evergreen Data: Stephanie Evergreen is a data visualization consultant who has authored two great books on data visualization. Her website and blog offer useful free resources, including the Qualitative Chart Chooser referenced above. You can join the Data Visualization Academy for more robust assistance.

DiY Data Design offers online courses and coaching around data visualization needs.

Data Analysis: Share Your Findings (Online Course, free account required to log in) In the fourth section of the NSVRC Data Analysis Online Course, you’ll learn about data visualization to report and summarize your findings.

 

References

Curtis, M. J., & Kukké, S. (2014). Activity-based assessments: Integrating evaluation into prevention curricula. Retrieved from the Texas Association Against Sexual Assault: http://www.taasa.org/wp-content/uploads/2014/09/Activity-Based-Assessment-Toolkit-Final.pdf

Evergreen, S. (2013a, December 19). 8 steps to becoming a reporting Rockstar [Video file]. Retrieved from https://vimeo.com/82318228 

Evergreen, S. (2013b). Evaluation report layout checklist. Retrieved from http://stephanieevergreen.com/wp-content/uploads/2013/02/ERLC.pdf

Evergreen, S. (2015). What TLDR means for your evaluation reports: Too long didn’t read (let’s fix that). Retrieved from http://stephanieevergreen.com/wp-content/uploads/2015/11/TLDRHandout.pdf

Gulbis, J. (2016, March 1). Data visualization – How to pick the right chart type? Retrieved from https://eazybi.com/blog/data_visualization_and_chart_types/

Lyons, J., & Evergreen, S. (2016). Qualitative chart chooser 2.0. Retrieved from http://stephanieevergreen.com/wp-content/uploads/2016/11/Qualitative-Chooser-2.0.pdf

Pankaj, V., Welsh, M., & Ostenso, L. (2011). Participatory analysis: Expanding stakeholder involvement in evaluation. Retrieved from http://www.pointk.org/client_docs/innovation_network-participatory_analysis.pdf


Comprehensive primary prevention work requires that we work beyond the individual and relationship levels in order to create deep and meaningful change in our communities. The prospect of evaluating beyond the individual and relationship levels of our work can seem daunting. However, although the potential scope of data collection is larger for community-level initiatives, the evaluation principles and practices are the same.

A strong theory of change that details how your short-, mid-, and long-term outcomes relate to each other will help you identify this synergy and point to opportunities for shared or related indicators across levels of the social-ecological model (SEM). Without a solid theory about these connections – and programming to address that theory – your work might not yet be ready to be evaluated beyond the formative level (see, e.g., Juvenile Justice Evaluation Center, 2003).

Part of what makes this work tricky is that research has not caught up with the current demands of practice to change risk factors across the ecology to meet our ultimate goal of preventing violence (Centers for Disease Control and Prevention [CDC], 2014; DeGue et al., 2014). That is, while we have good theories to guide us, we do not yet have a large enough evidence base from the literature to tell us that if we want to create X societal change, then we need to do Y (or, more likely, that we need to do A+B+C+Y). Thus, through evaluating our efforts, we are in a position to help build evidence around what works, guided by the current state of theoretical knowledge about community change and evaluation. Go us!

In order to demystify evaluating community- and societal-level changes, it is important to remember a few things about the social-ecological model. First, the levels of the ecological model do not exist as separate silos in the real world. That is to say, the domains of our lives interact dynamically all the time and are difficult to separate into distinct and discrete domains at any given point in time. Relatedly, it is the cumulative synergy of programming at various levels that works to achieve outcomes; this is the point of comprehensive initiatives. These outcomes should not necessarily be attributed to a single level of the SEM, especially as we move toward mid-term and longer-term outcomes.

For an example of related indicators, consider this idea, provided in the section on observational data collection:

Imagine that you are implementing programming within a middle school to increase bystander interventions. You work within individual classrooms to increase students’ motivation to intervene and give them the skills to do so. You also work with teachers to help them understand the importance of these efforts and give them bystander skills of their own. Additionally, you work with school administrators to develop policies and procedures that support bystander intervention. You’re hoping that these efforts will change the culture of the school. In this instance, observational indicators of the change for students could involve observing whether or not someone intervenes when a situation warrants intervention. An indicator of whether or not the norms have changed in support of such behavior might include observing how other students, teachers, or administrators respond to the person who intervenes or to the situation after the student intervenes. You can see the connection here between individual, relationship, and community factors with the interactions between behaviors and responses.

Community and societal level changes are often longer term than are changes for individuals, so this means that we need to think about how to measure benchmarks en route to those longer-term changes.

The Washington Coalition of Sexual Assault Programs published a brainstorming template that might be helpful as you begin to identify benchmarks. 

Community Readiness

One way to think about benchmarks in your community-level work is to look at shifts in community readiness to create change around sexual violence. The Tri-Ethnic Center for Prevention Research has a model of community change that addresses nine stages of readiness across five dimensions. Their guidebook (2014) provides extensive guidance on how to assess readiness. Shifts in readiness would represent shifts in a community’s sense of ownership over the issue of sexual violence prevention. This approach has been successfully applied in the sexual violence prevention field by the North Dakota Department of Health and The Improve Group, who created a series of tools to walk you through how to use the community readiness assessment, including a community leader survey, a series of how-to webinars, a discussion guide, and example reports.

An additional consideration is measuring principles and values as an indicator of the way our work changes our communities. The Whole Measures model developed by the Center for Whole Communities uses a series of rubrics to evaluate their community-based efforts around ecological sustainability. Among the principles they measure are social justice and community building.

Evaluating Partnerships

Much of our community building work requires developing or expanding partnerships. This is complex and time-consuming work. Evaluation can help us assess the effectiveness and growth of these partnerships. If you are beginning new partnerships to prevent violence at the community level, take a look at the tools on identifying partners, building partnerships, and sustaining partnerships from Veto Violence.

Many measures have been developed to evaluate community coalitions and partnerships. Here we highlight several that are currently being used in our field. If you are looking to establish an initial sketch of a partnership’s strengths and build a plan for growth, consider Prevention Institute's Collaboration Assessment Tool. The Center for Advancement of Collaborative Strategies in Health has developed an evaluated Partnership Self-Assessment Tool that measures partnership synergy, a key indicator of a successful collaborative. Since it is one of the few validated tools for assessing partnerships, consider reading more about the resources needed and the best uses for this tool. Support and Training for the Evaluation of Programs (STEPs) developed a measurement tool menu for evaluating coalition building and community mobilization efforts that is a great place to start if you are new to evaluating this type of work.

Evaluating Environmental Approaches

While the development and evaluation of environmental approaches to sexual violence prevention is in its infancy, here are some reports and resources that might be useful as you are developing your approach.

Policy Evaluation

There are a number of policies with evidence of impact on gender inequality and related risk factors for sexual violence. Whether it be workplace policies addressing sexual and domestic violence, pay equity, or community alcohol density policies, comprehensive work to prevent sexual violence requires policy work. The CDC provides some general guidance on policy evaluation through a collection of briefs listed in the resource section below. Changing policies takes time, and so does policy evaluation. In the context of sexual violence prevention, evaluating policy work often means evaluating how particular activities contribute to the development or implementation of a particular policy, whether that be an organizational policy or public policy. Use this worksheet with your team to identify potential measures for evaluating your policy activities. Space is provided to add ideas unique to your own work. The chart is broken into three sections: measuring capacity to implement policy efforts, measuring factors supporting policy efforts, and measuring visibility of the policy efforts.

Resources

A Guide to Measuring Policy and Advocacy (PDF, 48 pages): This guide from the Annie E. Casey Foundation provides information about and methods for evaluating policy and advocacy efforts that seek to change social conditions. They offer several case illustrations to help keep the information grounded in real, community-based work.

A Handbook of Data Collection Tools: Companion to “A Guide to Measuring Advocacy and Policy” (PDF, 45 pages) This collection of tools helps operationalize how advocates can measure core outcome areas identified in “A Guide to Measuring Advocacy and Policy.” This resource provides 27 sample tools and methodologies.

Policy Evaluation Briefs (PDF, 8 separate briefs) The CDC’s National Center for Injury Prevention and Control (NCIPC) developed this series of briefs to increase the use of policy evaluation methods in the field of injury prevention and control. The briefs and related appendices are intended to provide an increased understanding of the concepts and methodologies of policy evaluation. They are not intended to provide a “how-to” but rather a solid foundation for exploring the utility of policy evaluation as a methodology and an overview of the critical steps and considerations throughout the process. Each brief focuses on one specific aspect of policy evaluation; however, reading them all will provide a comprehensive overview of policy evaluation concepts and methodology.

Keeping the Collaboration Healthy (PDF, 3 pages) This tool developed by the Substance Abuse and Mental Health Services Administration presents some considerations for evaluating collaboration, common functions to evaluate, and examples of instruments that measure these functions.

Community Readiness for Community Change (PDF, 71 pages) This guide from the Tri-Ethnic Center for Prevention Research provides a comprehensive overview of the issue of community readiness to create change and a detailed plan for how to assess readiness in your own community.

North Dakota Department of Health Community Readiness Assessment (Online Resources, links to PDF files and YouTube videos) In 2017, the North Dakota Department of Health (NDDoH) Division of Injury and Violence Prevention conducted a statewide community readiness assessment for sexual and intimate partner violence primary prevention work. This toolkit provides access to their community assessment survey instruments, instructional webinars, discussion guides, and example reports.

Evaluating Comprehensive Community Change (PDF, 37 pages): This report from Annie E. Casey Foundation focuses on struggles and solutions related to evaluating comprehensive community initiatives.

Evaluating Comprehensive Community Initiatives (Online Resource) This section of the Community Tool Box, an online community development resource from the University of Kansas, focuses on evaluating comprehensive community efforts in a participatory manner. It offers a robust discussion of the challenges of this work alongside a model and series of recommendations for how to do this work.

Whole Measures: Transforming Our Vision of Success (PDF, 68 pages) This guide from the Center for Whole Communities introduces the model of Whole Measures, a rubric-based assessment tool that examines ten groups of practice related to doing comprehensive community work. Although the guide is designed for organizations working on ecological issues, most of the domains are directly relevant to sexual violence.

References

Centers for Disease Control and Prevention. (2014). Sexual violence:  Prevention strategies. Retrieved from Preventing Sexual Violence | Sexual Violence Prevention | CDC

DeGue, S., Valle, L. A., Holt, M. K., Massetti, G. M., Matjasko, L., & Tharp, A. T. (2014). A systematic review of primary prevention strategies for sexual violence perpetration. Aggression and Violent Behavior, 19, 346-362. doi:10.1016/j.avb.2014.05.004

Juvenile Justice Evaluation Center. (2003). Evaluability assessment: Examining the readiness of a program for evaluation. Retrieved from the Justice Research and Statistics Association: http://www.jrsa.org/pubs/juv-justice/evaluability-assessment.pdf

Tri-Ethnic Center for Prevention Research. (2014). Community readiness for community change (2nd ed.). Retrieved from https://tec.colostate.edu/communityreadiness/


What does it take to do good evaluation?

When you read the question above, do you immediately think about data analysis skills? Or survey design skills? While an evaluator needs skills in data collection and analysis, these are only two of the many areas of knowledge and skill that help an evaluator succeed. An organization’s capacity to conduct evaluation stretches well beyond any one individual’s knowledge level or skill set.

Organizational Level

Recent research on evaluation capacity building highlights the importance of organizational factors in building and maintaining evaluation capacity (Taylor-Ritzler, Suarez-Balcazar, Garcia-Iriarte, Henry, & Balcazar, 2013).

“Even when individual staff members have the knowledge and motivation to engage in evaluation activities such as mainstreaming and use, these activities are less likely to occur if their organization does not provide the leadership, support, resources, and necessary learning climate” (p. 200).

Capacity building efforts often focus on building the skills of a few particular staff members. Agencies may also create a dedicated evaluation position as an attempt to increase both capacity to do evaluation and the actual practice of evaluation. Sometimes these efforts are not successful because other critical steps have not been taken to make sure the culture of the organization is one that supports and encourages each aspect of evaluation. For example, if the organization is resistant to change, will data that supports change be ignored?

Members of organizational leadership may not need robust evaluation skills themselves, but leaders need to be evaluation champions to help integrate an evaluative mindset into the organization. Moreover, they need to know enough to understand the importance of providing adequate resources for evaluation efforts. Learn from preventionists around the country in this bulletin about ways that organizations can nurture a culture of evaluation.

Practitioner Level

“Like medicine, evaluation is an art and a craft as well as a science. Becoming a good evaluator involves developing the pattern-spotting skills of a methodical and insightful detective, the critical thinking instincts of a top-notch political reporter, and the bedside manner and holistic perspective of an excellent doctor, among many other skills” (Davidson, 2005, p. 28).

What does it take to be a good evaluator?  It depends on a lot of factors, but most importantly it depends on what type of evaluation will be conducted.  Certain evaluation methods involve higher levels of statistical analysis than others, and certain approaches to evaluation require more process facilitation than others. Evaluators often specialize in particular methods and approaches and work collaboratively with evaluators with different skill-sets when needed. The trick is to understand the limits of your own scope of skills and knowledge, and to know when to bring in help. (In fact, this is one of the American Evaluation Association’s guiding principles.)

Many of the skills that evaluators need overlap nicely with the skills preventionists use in their daily work (The Canadian Evaluation Society, 2018). See the following list for a few examples:

  • Meeting and group facilitation
  • Conflict resolution
  • Self-reflexivity
  • Critical thinking
  • Pattern identification
  • Interpersonal communication

Preventionists who have already developed many of the skills above can supplement them with formal or on-the-job training in evaluation-specific skills like data collection and analysis, thereby building a strong evaluation skill set.

Building Capacity

Many people and organizations approach evaluation capacity building as a one-time activity. However, because evaluation is a broad field of practice and doing good evaluation requires particular individual skills and motivation in addition to organizational buy-in (Taylor-Ritzler, Suarez-Balcazar, Garcia-Iriarte, Henry, & Balcazar, 2013), evaluation capacity building is much more likely to be effective when approached as an ongoing process.

When embarking on a process of capacity building, consider having your organization complete an evaluation capacity assessment to identify the most critical evaluation-related needs and then developing a plan for how to meet them. Some resources and ideas for doing so are outlined below.

Low or No Cost

A variety of high-quality resources are available to meet skill- and knowledge-building needs, including many free or low-cost online resources. This toolkit links to many of them that might be especially relevant to your primary prevention efforts. You can also visit our collection of Self-Study Guides for inspiration on how to build knowledge related to specific sub-areas of evaluation. Websites such as Coursera and edX provide access to free online courses from universities around the world. These websites often have courses related to general data analysis as well as courses on specific tools like social network analysis.

Investments

Hiring an evaluator to provide capacity-building services is a great investment if it is one you can make. If you are unable to hire an evaluator, you might be able to find university professors or graduate students who will provide pro bono or reduced-rate capacity-building services. Working with an outside evaluator, as opposed to an evaluator on staff within your organization, might be the best way to address organization-level issues related to evaluation capacity. Remember that not all researchers or evaluators are skilled capacity builders, so you’ll want to seek out a person who identifies this as one of their areas of expertise.

Consider joining the American Evaluation Association (AEA) to take advantage of ongoing learning opportunities, including Coffee Break webinars, eStudy courses, evaluation journals, and more.

Several evaluation conferences are held throughout the United States each year, many of which include learning opportunities that are tailored for everyone from beginners to experts. Check out the following options:

You might also have access to similar events through regional evaluation groups.

General Resources

Cultivating Evaluation Capacity: A Guide for Programs Addressing Sexual and Domestic Violence (PDF, 58 pages): This guide from Vera Institute of Justice explores evaluation capacity building specifically for antiviolence organizations. It includes an evaluation capacity assessment tool, tips for building evaluation capacity, and a variety of useful resources. 

Competencies for Canadian Evaluation Practice (PDF, 17 pages) This document from the Canadian Evaluation Society discusses competencies for evaluators across five practice domains – Reflective, Technical, Situational, Management, and Interpersonal.

Evaluation Capacity Assessments

Evaluation Capacity Assessment Tool (PDF, 7 pages) This assessment, developed by Informing Change, examines aspects of both organizational- and staff-level evaluation capacity and can be self-administered.

Capacity and Organizational Readiness for Evaluation (CORE) Tool (PDF, 1 page) This short tool developed by the Innovation Network specifically focuses on organizational issues related to evaluation capacity.

Primary Prevention Capacity Assessment Tool (PPCA) (PDF, 10 pages) This tool, developed by the Ohio Rape Prevention and Education Team, is designed to identify and prioritize the training and technical assistance needed to build prevention and evaluation capacity within the RPE program.

Videos

Watch the NSVRC Mapping Evaluation Podcast Series.

Activities

The Training and Capacity Building slide deck and activity guide have several training and meeting activities focused on building capacity around a variety of evaluation skills.

Practice Tip

Even small changes can shift an organizational culture toward increased support for evaluation. For example, members of organizational leadership can look for opportunities to transparently and consistently collect, analyze, and respond to data in their day-to-day roles. This could range from informal efforts, like taking real-time polls in staff meetings about relatively low-stakes issues, to more formal and involved processes.

The important piece of this is that staff members see leadership taking data seriously by

  • discussing points of learning and actually making changes as a result of what was learned or
  • encouraging staff members at all levels to incorporate evaluation into their work in formal and informal ways.

Staff members who are not part of formal leadership can also model evaluation-driven decision making and highlight the ways that evaluative processes improve their work.

References

The Canadian Evaluation Society. (2018). Competencies for Canadian evaluation practice. Retrieved from https://evaluationcanada.ca/files/pdf/2_competencies_cdn_evaluation_practice_2018.pdf

Davidson, E. J. (2005). Evaluation methodology basics: The nuts and bolts of sound evaluation. Thousand Oaks, CA: Sage.

Taylor-Ritzler, T., Suarez-Balcazar, Y., Garcia-Iriarte, E., Henry, D. B., & Balcazar, F. B. (2013). Understanding and measuring evaluation capacity: A model and instrument validation study. American Journal of Evaluation, 34, 190-206. doi:10.1177/1098214012471421


Uses

For data collection through observation or other generally qualitative methods, rubrics provide a quick way to classify or categorize data based on a set of pre-determined criteria. Criteria-based rubrics can also help with data synthesis from a variety of sources so that the data work collectively toward an answer to an evaluative question (Davidson, 2005).

Rubrics can speed up data collection and help reduce subjectivity while also increasing the likelihood that everyone who collects data will be looking for and recording approximately the same thing. (This is also known as inter-rater reliability.)
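
If you want a quick sense of how consistently two data collectors are applying a rubric, a simple percent-agreement check is often enough to start; more formal statistics such as Cohen's kappa exist if you need them. The sketch below is illustrative only, using hypothetical rubric scores.

```python
# Hypothetical rubric scores (1-4) from two observers rating the same ten sessions.
rater_a = [3, 4, 2, 3, 3, 4, 1, 2, 3, 4]
rater_b = [3, 4, 2, 2, 3, 4, 1, 3, 3, 4]

# Percent agreement: how often the two raters assigned the exact same score.
matches = sum(a == b for a, b in zip(rater_a, rater_b))
agreement = matches / len(rater_a)

print(f"Raters agreed on {matches} of {len(rater_a)} sessions ({agreement:.0%})")
```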

Examples and More Information

E. Jane Davidson’s work around actionable evaluation makes a compelling case for rubrics as a meaningful conglomeration of criteria that can help answer questions that might seem difficult to answer through data collection (Davidson, 2005).

A great example of this in the prevention world can be found in Ohio, where a rubric is used to assess grantee capacity around a variety of prevention principles, skills, and orientations (Stevens & Ortega, 2011). The rubric is based on a tool originally produced in the Virginia Sexual and Domestic Violence Action Alliance’s Guidelines for the Primary Prevention of Sexual Violence & Intimate Partner Violence (2009) and has criteria to measure domains such as program comprehensiveness, socio-cultural relevance, sustainability, and evaluation use. You can check out the rubric in Ohio’s Empowerment Evaluation Toolkit, housed on the Ohio Domestic Violence Network’s website (Stevens & Ortega, 2011).

The Center for Whole Communities developed a rubric for evaluating deep community change work that focuses on a variety of domains relevant to sexual violence prevention work, including community building and justice and fairness (Center for Whole Communities, 2007). There’s even an online tool to facilitate using this rubric with community groups.

Other preventionists have used rubrics to assess facilitation skills of a person implementing education-based prevention programming. Activity-based evaluation/assessment also relies on rubrics and other scoring tools to assist with observational data collection (Curtis & Kukké, 2014).

References

Center for Whole Communities. (2007). Whole Measures: Transforming our vision of success. Retrieved from http://measuresofhealth.net/additional_resources/moh_download.shtml

Curtis, M. J., & Kukké, S. (2014). Activity-based assessments: Integrating evaluation into prevention curricula. Retrieved from the Texas Association Against Sexual Assault: http://www.taasa.org/wp-content/uploads/2014/09/Activity-Based-Assessment-Toolkit-Final.pdf

Davidson, E. J. (2005). Evaluation methodology basics: The nuts and bolts of sound evaluation. Thousand Oaks, CA: Sage.

Stevens, A. B., & Ortega, S. (2023). The Primary Prevention Capacity Assessment matrix. Retrieved from the Ohio Domestic Violence Network: https://www.odvn.org/empowerment-evaluation-toolkit/

Virginia Sexual and Domestic Violence Action Alliance. (2009). Guidelines for the primary prevention of sexual violence and intimate partner violence. Retrieved from https://vsdvalliance.org/change-culture/prevention-in-virgnia/


It is easy to get caught up in the idea of using measures that have been validated through research studies (e.g., the Illinois Rape Myth Acceptance Scale) as your tools of choice. While there are definite benefits to this (see below), there are also significant drawbacks. As with all things in evaluation, using existing measures should be done very deliberately and should not be considered the default option when planning your evaluation.

Benefits

  • Existing measures have often undergone rigorous testing to ensure that they are actually measuring what they claim to be measuring (validity) and that they can do so repeatedly (reliability). As a result, using them strengthens your case for saying you are measuring the change you seek to measure with them.
  • The hard work of drafting, re-drafting, revising, and re-revising questions is already done for you! Writing good questions is an art and a science and is far more difficult than most people imagine. Previously validated measures developed by experts are less likely to fall prey to common pitfalls in survey development, and using them saves you a lot of development time.
  • The use of existing measures can sometimes increase the buy-in from other partners, especially funders.

Drawbacks and Cautions

  • Not all measures that are widely used or found in the research are actually valid and reliable.
  • Even if they are valid and reliable, they may not be appropriate for the community or population with whom you are working. For example, many instruments are validated through research conducted with college populations. In contrast, many (if not most) prevention workers are working with younger populations. Beyond being at a different educational level, these participants are at a different developmental level, which influences how concepts and constructs are understood. The measures we use need to be appropriate to our given populations.
  • In addition to being appropriate to our given population, our measures need to be specific to the change we are interested in creating (i.e., our outcomes) and related to the intervention(s) we are employing to make that change. Sometimes existing measures are chosen for the above benefits without sufficient consideration to their direct applicability to the work being measured; if they don’t speak to your outcomes and programming, then the benefits are irrelevant because the data won’t be useful to you.
  • Measures can become outdated, especially measures that seek to assess particular social constructs like gender-role socialization. These constructs change over time, as do the indicators of them. For example, the Attitudes Toward Women Scale (Spence, Helmreich, & Stapp, 1973) includes the following item: “It is ridiculous for a woman to run a locomotive and for a man to darn socks.” The short version of this measure is from 1978. Nearly 40 years later, most Americans probably assume that it’s ridiculous for anyone to darn socks, but that’s not necessarily because there’s been a drastic shift in our cultural perceptions about gender.
  • Starting with validated measures and editing the questions to fit your audience or issue is not necessarily a bad idea and might very well be better than starting from scratch. However, it is important to remember that changing anything about the instrument means the original evidence of validity and reliability no longer applies; the edited version is ultimately an untested instrument.

 

Additional Resources

Want more guidance on selecting, adapting, and evaluating prevention approaches? Check out this guide from the Division of Violence Prevention. This guide supports good decision making that balances delivering prevention approaches as intended with considering unique community contexts.

Measures Database (PDF, 2 pages) This database maintained by the Wisconsin Coalition Against Sexual Assault (WCASA) includes resources where you can find free measures, scales, or surveys specific to sexual assault prevention work. Some measures/scales are general examples and others are "standardized measures". Many examples are provided; there are pros and cons to each measure and WCASA does not endorse any specific options. Please contact NSVRC at prevention@nsvrc.org for assistance in identifying appropriate measures.

Evidence-Based Measures of Bystander Action to Prevent Sexual Abuse and Intimate Partner Abuse Resources for Practitioners (PDF, 31 pages) This document is a compendium of measures of bystander attitudes and behaviors developed by the Prevention Innovations Research Center. Some of the versions of the measures have been researched more thoroughly in terms of psychometric properties than others. Please see the citations provided for articles that describe the versions of our measures that have been published. See also, Evidence-Based Measures of Bystander Action to Prevent Sexual Abuse and Intimate Partner Violence: Resources for Practitioners
(Short Measures) (PDF, 22 pages) which provides administrators of prevention programs with shortened, practice-friendly versions of common outcome measures related to sexual abuse and intimate partner violence. These measures have been analyzed to develop a pool of scales that are concise, valid, and reliable.

 


Listen to NSVRC's Resource on the Go Podcast, How to Measure a Sense of Community (Audio, 17 minutes), where NSVRC’s Evaluation Coordinator, Sally J. Laskey, talks with researchers Iris Cardenas, a PhD candidate in the School of Social Work at Rutgers University, and Dr. Jordan Steiner about the Brief Sense of Community Scale and their study examining its cultural relevance with non‐Hispanic, Black, and Hispanic college students.

 

 

References

Spence, J. T., Helmreich, R., & Stapp, J. (1973). A short version of the Attitudes toward Women Scale (AWS). Bulletin of the Psychonomic Society, 2, 219-220. doi:10.3758/BF03329252


To increase participant engagement with questionnaires, you can administer them through interactive means rather than on a piece of paper or through a computer. Interactive options have the potential to be more engaging because they increase participants’ kinesthetic involvement and feel less like tests (Dodson & Paleo, 2011).

Consider the following examples:

Respond by Moving

If your questions involve distinct choices (e.g., strongly agree, agree, disagree, strongly disagree), you can have signs up in a room that correspond to each answer choice, read the question aloud and ask people to move in the room to the answer that they would like to endorse. You would then record the number of people in the room who endorse each response option and move on to the next question. This method does present a bit more social desirability bias since people are responding in front of their peers; however, this is not inherently problematic. Since we often make decisions or take action while surrounded by our peers, this method can actually mimic the real world in ways that completing the questionnaire in private does not. Keep in mind also that this method might present difficulties for people with limited mobility and also requires a room with space for people to move around (see NSVRC, 2015).

Adhesive Formats

You can also change the way participants respond to items on questionnaires by having them use stickers, dots, sticky notes, etc. to indicate their responses (Dodson & Paleo, 2011). This can be done in a manner similar to the respond-by-moving option outlined above, where participants publicly endorse an option by posting a sticky note on an answer. For example, you can have flipcharts up in the room that indicate response options. Participants are then given a dot and asked to place the dot on the flipchart that represents their answer. Or you can give people sticky notes on which they can write a few sentences explaining why they are responding the way they are and then stick that note on the appropriate flipchart. This will give you qualitative data for additional context. Alternatively, you can give participants stickers to use to endorse answers on their own pieces of paper. This can be as simple as creating a grid of response options and asking them to put dots in the corresponding boxes to indicate their responses to various options.


Adhesive Formats: Using Dots, Stickers, and Labels for Data Collection (Online Presentation) Presentation by Lyn Paleo to the American Evaluation Association Annual Meeting, November 4, 2012, Minneapolis.

Human Spectrogram (Free online course (15 minutes), requires login) This interactive learning tool from the NSVRC walks participants through how to implement an activity-based evaluation through the use of the human spectrogram. Information covered includes an overview of the human spectrogram, a preparation checklist, and additional resources from the NSVRC Evaluation Toolkit.

 

References

Dodson, D. & Paleo, L. (2011, October 31). Denece Dodson and Lyn Paleo on “Adhesive Formats” as an alternative to evaluation post-tests [Blog post]. Retrieved from the American Evaluation Association: http://aea365.org/blog/denece-dodson-and-lyn-paleo-on-“adhesive-formats”-as-an-alternative-to-evaluation-post-tests/

National Sexual Violence Resource Center. (2015, August 12). Program Evaluation & Best Practices Forum. xChange Forum.


Colleges and universities can be great resources for agencies that need assistance with developing evaluation processes for their work. Depending on your budget and the scope of your evaluation, you might want to look at hiring an external evaluator, such as a professor or graduate student, for assistance. Also, it’s not unusual for students to need program evaluation internships or projects for classes. Good places to look include public health, social work, community psychology, and education departments. The more you know about evaluation when you initiate these partnerships, the more likely you are to get meaningful and useful results from the process.

 

When working with colleges and universities, it’s helpful to keep the following in mind:

Research and evaluation are different

The differences between the goals of research and the goals of evaluation are often misunderstood.

In the opening of her book Transformative Research and Evaluation, Donna Mertens explains the difference in this way:

Research is defined as a systematic method of knowledge construction; evaluation is defined as systematic method of determining the merit, worth, or value of a program, policy, activity, technology, or similar entity to inform decision making about such entities (Mertens, 2009, p. 1).

While the two often employ similar methods for data collection and analysis, evaluation’s explicit purpose is to make value judgments about our work, and the production of actionable data is central to the program improvement aspect of evaluation. Research, on the other hand, is more focused on generating knowledge. While both of these processes are important for our work overall (for example, research helps build the theories that guide our programming), mixing them up can result in the development of tools and methods that are less useful for agencies doing work in communities.

Many agencies have embarked on partnerships with universities only to finish the partnership with a bunch of materials they don’t know how to use, cannot implement sustainably, and/or don’t see the meaning in.

However, if you can go into the partnership with a little bit more knowledge about what you need from an evaluation and what you hope to achieve, you’re much more likely to form a partnership with meaningful results.

The same is true for hiring professional evaluators who are not affiliated with universities. Evaluators tend to specialize in certain methods, approaches, and types of evaluation and might also specialize in certain social issues. Understanding what you want and need from an evaluation can help you find an evaluator who will be a good fit.


Read a case example of a successful university partnership.

 

 

 

Resources

 

 

Evaluation for Improvement: A Seven Step Empowerment Evaluation Approach for Violence Prevention Organizations (PDF, 104 pages) This manual provides step-by-step information on how to hire an empowerment evaluator with the goal of building organizational capacity for evaluation.

References

Mertens, D. M. (2009). Transformative research and evaluation. New York, NY: Guilford Press.


Rape crisis centers, other sexual violence service providers, and community-based organizations often struggle to maintain adequate resources to serve their communities. When this is the case, it can feel difficult to earmark money for evaluation. However, evaluation is a critical and integral part of accountability and provision of effective, high quality services. Additionally, it’s still true that resources can be tight and that some funders, while requiring evaluation, impose restrictions on how much money can be spent on evaluation practice. Since evaluation comprises a spectrum of activities, it is possible to do meaningful evaluation of sexual violence prevention work on a minimal budget. The following tips can guide you in doing just that and, in the process, building capacity to do more robust evaluation when/if the resources allow.

  • Consider partnering with colleges/universities. There are tricks to doing this well. Even if they can’t engage in a long-term partnership with your agency, professors or students from colleges or universities might be able to point you in the direction of existing tools and resources that are easy to access and easy to implement. They may also work with you to develop an evaluation plan for you to implement.
  • Keep your scope small by focusing on the most critical questions. When you think about possible evaluation questions for your program, which ones rise to the top as the most critical to answer? Which questions, if answered, would provide the most actionable data – that is, which answers will you clearly be able to act on in meaningful ways? Then consider the data sources that might help you answer those questions. Which important questions do you have the resources to answer? At this point you might notice that you do not have the resources to answer the questions that feel most critical. If that’s the case, consider options for collecting data that point to the answer, or consider ways to raise funds or leverage other resources toward answering those questions. If you have a clear sense of the questions you need to answer and how that evaluation will positively impact your initiatives, it might be easier to find funders or other investors who are willing and able to put resources toward your evaluation efforts. Consider sources of existing data that might help you approximate answers to your questions.
  • Let your needs and means guide the rigor. While it’s certainly ideal to collect more than one type of data to answer a given question or to have multiple people involved in data collection, sometimes that may not be feasible due to resource restrictions. If you are primarily using the data for program improvement and learning purposes, this won’t be as big of a deal as it would be if you were hoping to make claims about your program’s overall effectiveness. Just be aware of the limits of the claims you can make when you decrease the rigor of your evaluation. For example, if you are working in multiple communities but only collect data in one of those communities, any success or challenge evident in the data does not necessarily apply to your program as a whole. Don’t let restrictions or limitations keep you from doing anything at all; focus on what you can learn from what you can do. For example, you might be able to interview a few community members or hold a small focus group of participants. Integrated data collection methods like activity-based evaluation can be relatively low-resource options for curriculum-based work.
  • Collaborate with others in the community. Since sexual violence prevention work often involves community coordination – partnering with other organizations, infusing sexual violence prevention messages into other organizations’ services, etc. – there are also opportunities to collaborate on various stages of evaluation. Perhaps someone else is already collecting the data you need or needs the data you are collecting and can offer resources toward the analysis and interpretation of the data. Schools often survey parents of their students. If you are working in collaboration with school teachers or administration, you might be able to access the data they collect (if it is relevant to your work) or even add questions of your own to their survey.

Resources

Evaluation on a Shoestring (Online Article) This article from BetterEvaluation provides ideas and resources for conducting evaluations with limited funding.

How to Tame the “Survey Beast”: Overview of strategies for reducing the time and resources needed to conduct evaluation surveys on a shoestring budget (PDF, 2 pages) These tips from the Ohio Primary Prevention of Sexual and Intimate Partner Violence Empowerment Evaluation Toolkit are designed to reduce the burden of managing surveys and to make the most of the data you collect.


In an effort to learn from preventionists and evaluation partners around the country, this section of the toolkit provides case examples and case studies in sexual violence prevention evaluation. If you have a lesson learned that you would like to share, you can submit your case examples for consideration by filling out this brief form.

 

Case Study: Culturally Relevant Evaluation of Prevention Efforts 

This case study examines the evaluation process of a violence prevention curriculum called “Walking in Balance With All Our Relations: A Violence Prevention Curriculum for Indigenous People.”

You can also watch this recorded webinar with the authors of the case study, Evaluating Culturally-Relevant Sexual Violence Prevention Initiatives: Lessons Learned with the Visioning B.E.A.R. Circle Intertribal Coalition Inc. Violence Prevention Curriculum.

Case Example: University Partnership/Mixed Methods

This case example provides a description of a partnership between a rape crisis center and a university to create a meaningful pre/post-test questionnaire to evaluate their prevention program.

Case Example: Participatory Evaluation at the Local Level

This case example provides a description of participatory evaluation within a school-based prevention program where staff worked with student leaders to develop outcome measures, provide observations, and analyze and interpret the data collected.

Case Example: Conducting a Statewide Survey Focused on Risk and Protective Factors

This case example provides an overview of how one state created and used a survey to identify risk and protective factors for sexual violence. Read more about the process in this blog post, Not for the Faint of Heart: Conducting State-wide Surveys.


When disaster strikes, it tears the curtain away from the festering problems that we have beneath them.

- Barack Obama

So why are we even talking about evaluation during a crisis or disaster? Crisis, by definition, is an unpredictable period of immense struggle, threat, and difficulty. As we grapple with disasters, those of us who are working to implement and evaluate prevention programs are often left with hard questions and uncertainty. What we do know is that disasters disrupt the physical and social environments that shape individual and community health and well-being. Evaluation strategies can offer vital tools to find out what communities in crisis need, and can help us center social justice during disasters. Making quick adaptations to programs and evaluation plans in the context of rapidly changing norms and environments can pose significant challenges. This section of the Toolkit doesn’t provide all the answers, but does include insights from evaluators and some guideposts for how evaluation work during crisis situations can be a tool for prevention. 

As people who work to prevent sexual assault, abuse and harassment try to meet existing and emerging needs amidst crisis, evaluating newly introduced methods can be daunting. We are generally already overburdened with accommodating rapid changes in our work individually and organizationally, so streamlining evaluation often requires significant thought and consideration. This can take many shapes, including pausing evaluation efforts or shifting their focus altogether. 

Ethics and Trauma-Informed Evaluation

Sexual assault prevention evaluation puts the needs of community members first. This requires working with community members in order to identify needs and ensure that efforts are centered around their voices. This will help to make sure the evaluation work can be used to improve people’s lives and not cause more harm. So, what might this look like in the context of a disaster or crisis?  

When evaluating sexual violence prevention work, it is helpful to think about trauma-informed approaches to evaluation. A first step is to consider the 6 Guiding Principles to a Trauma-Informed Approach (Center for Preparedness and Response, 2020).

6 Guiding Principles to a Trauma-Informed Approach

After grounding yourself in the principles of trauma-informed approaches, you can then explore specific examples of how communities have prioritized equity and community well-being in the wake of catastrophic events in this brief from Prevention Institute. Using a trauma-informed approach will help you consider your evaluation questions, use different evaluation approaches, and adapt your data collection methods.

The United Nations Development Programme Independent Evaluation Office has published a useful infographic to help organizations understand important parameters when evaluating their programs. The infographic outlines how to 1) rethink evaluation plans and teams, 2) evaluate the impact of the crisis at hand, 3) collect data remotely, 4) engage stakeholders virtually, 5) share evaluations globally, and 6) connect with evaluation networks.

For an example of how Rape Prevention and Education (RPE) program evaluators worked with RPE programs in Michigan to share the story of how they pivoted their work during the COVID-19 pandemic, read this report, Community Connections: Key in the COVID-19 Pandemic.

Skills Evaluators Bring to Crisis Situations

The COVID-19 pandemic has shown the importance of evaluation work and highlighted the skills that evaluators bring to the collective good. In their blog, “How can we use evaluation in this time of community crisis?,” the Emergence Collective outlines the ways in which nontraditional evaluation methods are often helpful in times of difficult decision making. They have compiled a list of questions for evaluators to consider. This resource also includes questions related to organizational decision-making and impact evaluation, along with a list of evaluation activities and their intended outcomes. The Emergence Collective pulls from Michael Quinn Patton’s book Developmental Evaluation as well as his blog post Evaluation Implications of the Coronavirus Global Health Pandemic Emergency, in addition to Bridgespan’s collection of COVID-19 response resources.

When looking specifically at the role of evaluators, Miranda Yates reflects on her experience in youth and family services to highlight lessons learned during the global pandemic in an AEA blog post. Yates (AEA365 Blog, 2020) offers a list of practical tools and ideas that have served her and her organization during times of crisis:

  • Listen to people on the front lines and be open to pivoting as needed.
  • Focus on identifying and responding to the pressing needs that are emerging.  
  • Anticipate needs. While a general offer of help is excellent, concrete proposed ideas of what you could do are even better. At this moment, many people are not in a space to connect the ways that data and evaluation might help with managing the immediate crisis and with laying more solid groundwork for what is to come.  
  • Offer up your project management, communication and facilitation skills for whatever is needed to help organize efforts. 
  • Respond to the immediate while laying groundwork for identifying and supporting needs in the long term.
  • Keep a systems lens.
  • Tap into available expertise and draw upon your partnerships and connections. 
  • As much as possible, share what you are learning. 

The full blog includes examples for each suggestion, as well as outside links to relevant resources. 

Adaptation: Adjusting Evaluation Strategies and Evaluation Questions

Adaptation is common in the sexual violence prevention field, and we have tools that can help us think through how to evaluate those adaptations. Evaluation is always evolving, and this often happens at a faster pace during and after disasters. Although adaptation is key to understanding emerging needs, disasters don’t happen in a vacuum – evaluators might find themselves affected by the very same barriers faced by the people they work with. Strategies must often be conceptualized on the fly, under high-stress conditions. In order to reimagine strategies, pausing some work may be required. It may be natural to assume that work can be picked up again once the crisis is over. However, as Marian Urquilla of the Center for Community Investment notes, we must be mindful that 1) it’s impossible to know how long a crisis will last, and 2) assuming things will return to normal “when all this is over” underestimates the larger lasting impacts of a crisis (Urquilla, 2020). After a crisis, the world will be a different place than it was before. It’s vital to be aware of what assumptions might be shaping your strategies. For example: Did your affordable housing strategy assume a hot market? Did your leadership development program assume intensive in-person sessions with lots of time for informal relationship building?

The Center for Community Investment provides a useful collection of triage tools to help assess shifting priorities, including an annotated tool, a sample tool, a blank tool template, and instructions for facilitating group use of the tool. These tools can help identify not only new strategies, but also the ways in which evaluation questions need to change. Using the example of the COVID-19 pandemic, the UNFPA provides an overview of adapting evaluation questions to help organize evaluation criteria. Their questions help to pinpoint the relevance, effectiveness, efficiency, coherence/coordination, and sustainability of country-level programs during a crisis, and can serve as an example for other organizations and evaluators. In the same vein, DEVTECH shared a simple crosswalk chart of typical evaluation strategies and added a column with possible adaptations during COVID-19.

Tools

Virtual Adaptation Guidance for Sexual Violence Prevention Curriculum and Interventions (PDF, 1 page) This chart provides an overview of many key sexual violence prevention curricula and interventions that have provided virtual adaptation guidance resources to their implementers.

It’s OK to Move Things Online

Disasters change not only the content of the work but also how you do it, since normal methods of connection can be disrupted. Transitioning to more flexible methods of communication, research, and community building is key. Creating more innovative virtual approaches and digital services can be useful.

We talked about focus groups and interviews in the Data Collection section of the Toolkit. Here, we dig a little deeper to talk about conducting these virtually. Online focus groups can be a way to invite a group of people located in different places into a shared virtual space and connect with preventionists and/or program participants.

Research shows no significant difference in the outcomes and quality of virtual focus groups versus in-person meetings (Underhill & Olmsted, 2003), and even finds that online focus groups are an ideal method when the work addresses sensitive topics of a personal nature – like health, sexuality, crime, and politics – as participants are more inclined to answer honestly and feel more comfortable (Forrestal, D’Angelo, & Vogel, 2015).

While there can be some technology challenges (Kite & Phongsavan, 2017), virtual spaces can help minimize certain mobility barriers for participants who may otherwise not be able to participate in in-person meetings. Hosting a web meeting can also bring together geographically diverse groups and contribute to a sense of community despite distance. 

Dr. Natalie E. Cook presents this short (13-minute) overview on "Conducting Virtual Focus Groups" for Shine Lab (February 2021). Virtual focus groups can be a valuable qualitative research and evaluation strategy during the pandemic and beyond, reducing barriers related to transportation and reading/writing.

Remote spaces aren’t only excellent avenues for data collection; they also offer opportunities to deliver community support and training and to facilitate positive interactions. Elizabeth DiLuzio and Laura Zatlin of Good Shepherd Services outline a number of great strategies for before, during, and after a virtual gathering. For tips and best practices for facilitating engaging online events, watch this web conference recording from PreventConnect.

As we move more into online spaces with support and prevention in mind, it's important to consider ethics and safety in the virtual world. The Safety Net Project has created an excellent overview of how to help survivors navigate privacy and safety risks when choosing to participate in online groups. Since there will always be survivors participating in prevention work and prevention programs, these suggestions are important to consider for any evaluation activities that occur online.

Wondering about the pros and cons of various online platforms? The Safety Net Project also provides an in-depth review of Zoom options and privacy considerations, as well as a Comparison Chart of common tools for digital services to help you understand which technology best meets your needs. This checklist can help decision makers learn how to adapt policies and practices, make key decisions, and train staff. TechSafety.org has a number of other great resources about digital safety, available in both English and Spanish, in their Digital Services Toolkit.

Beyond privacy and safety considerations, switching to online spaces for evaluation activities requires thoughtful guidance and support as well – especially if it’s a transition from work that was previously done exclusively in person. Great crowdsourced documents have emerged to support folks making this change. Doing Fieldwork in a Pandemic (Lupton, 2021) offers tips and references from researchers and evaluators about various digital research methods.

Do you have a story to share about how you have adapted or developed evaluation strategies in the context of a disaster? Submit your brief case example to be considered for inclusion in the Evaluation Toolkit. 


Webinar Series Recordings

For more context and information on how the COVID-19 pandemic impacted sexual violence prevention programs and how they adapted, watch the recordings of the National Sexual Violence Resource Center and PreventConnect web conference series. Links to all the recordings are below.

References

Center for Preparedness and Response. (2020). Infographic: 6 guiding principles to a trauma-informed approach. Centers for Disease Control and Prevention. https://www.cdc.gov/cpr/infographics/6_principles_trauma_info.htm

DiLuzio, E. (Ed.). (2020, March 21). Reflecting on the role of evaluator during this global pandemic by Miranda Yates [Blog post]. AEA365, American Evaluation Association.

Forrestal, S. G., D’Angelo, A. V., & Vogel, L. K. (2015). Considerations for and lessons learned from online, synchronous focus groups. Survey Practice, 8(3), 2844–2752. https://www.semanticscholar.org/paper/Considerations-for-and-Lessons-Learned-from-Online%2C-Forrestal-D%E2%80%99Angelo/18be6b0bd0688d51ca9a52ea1969e81027500d3b

Kite, J., & Phongsavan, P. (2017). Insights for conducting real-time focus groups online using a web conferencing service. F1000Research, 6, 122. https://doi.org/10.12688/f1000research.10427.1

Lupton, D. (Ed.). (2021, July). Doing fieldwork in a pandemic (Crowd-sourced document, rev. ed.). https://docs.google.com/document/d/1clGjGABB2h2qbduTgfqribHmog9B6P0NvMgVuiHZCl8/edit

National Network to End Domestic Violence, Safety Net Project. (2020). Online support groups for survivors [Webpage]. https://www.techsafety.org/online-groups

Underhill, C., & Olmsted, M. G. (2003). An experimental comparison of computer-mediated and face-to-face focus groups. Social Science Computer Review, 21(4), 506–512. https://doi.org/10.1177/0894439303256541

Urquilla, M. (2020, March 31). Reimagining strategy in the context of the COVID-19 crisis: A triage tool [Blog post]. Center for Community Investment. https://centerforcommunityinvestment.org/blog/reimagining-strategy-context-covid-19-crisis-triage-tool


Blogs