0:00:00.0 Sally Laskey: Welcome to Resource on the Go, a podcast from the National Sexual Violence Resource Center on understanding, responding to, and preventing sexual abuse and assault. I'm Sally Laskey, NSVRC's Evaluation Coordinator. We have been getting a lot of requests related to bystander intervention, so we are providing a rebroadcast of a research-to-practice interview conducted by my former colleague Taylor Teichman and myself, with Dr. Rose Hennessy Garza from the School of Medicine and Public Health at the University of Wisconsin-Madison. Dr. Hennessy Garza is a prevention scholar, implementation scientist, and program evaluator focused on sexual violence prevention, women's health, and health disparities. She brings over a decade of experience in sexual violence prevention, advocacy, evaluation, and research to this conversation about measuring bystander intervention. [music]

0:01:16.2 SL: Measurement of bystander intervention is a continuous challenge. What are some best practices in measuring bystander outcomes to overcome common challenges?

0:01:27.0 Dr. Rose Hennessy Garza: That's a great question. And when we think about that from an implementation standpoint, say I'm on my campus implementing a program, or maybe working with one evaluator or researcher to do it. I wanted to bring up three challenges that I have seen and three recommended suggestions. The first being, and everyone knows this, that having positive intentions or attitudes to do something is much different than actually doing it. And we have a whole research literature to show that. Like, "I plan on exercising and eating healthy, but then the reality is that life is hard and there are challenges, and I don't always end up doing that." So, how do you implement a program? And then you're like, "I wanna measure change in behavior, because I know that attitudes and intentions are correlated with behavior, but they're not the same." But my students and the participants are right here, so I give them a post-test. Well, I can't ask them on their post-test, "Have you increased your behaviors?", because they have had 12 seconds to do so, and they need more time.

0:02:30.1 DG: So, most practitioners are asking about intentions and attitudes. Now, there's nothing wrong with that, but it is a limitation. Reporting those findings in the context that intentions are a good predictor of behaviors is fine, but I would like to argue that nowadays we could do better, especially on college campuses or in schools, because a lot of these programs are becoming mandated. So, we know the exact students who are in the trainings. A lot of these are required online programs, and we have their email addresses. And I wonder what it would look like to reallocate resources, and, in what would be an upward and downward conversation, to reallocate funding priorities as well, to say, "What if, instead of giving out those post-tests, which we then have to compile, or we have to create the link and we have to edit, we wait three to six months?" Because most of these programs are in the fall, and the students are gonna be here through at least December, probably through spring for many of them. We send them a follow-up link, because we already have their email, and we ask them to actually fill out a follow-up survey to measure behaviors.
0:03:45.1 DG: Now, we're gonna have a lower response rate, and we know that, but we could maybe change the way we're looking at things, because we wanna know the actual experiences and behaviors of people on our campus or in the communities where we're doing this. So, that's just one suggestion: if we have access to these populations, could we consider reallocating our resources and moving beyond "Was the training helpful? Did you like it? Did it change the way you think?" to "What experiences," even if it's open-ended, "have you had since this training, and what have you done three to six months later, or even a few years later?" And then, we don't have a lot of money or resources in our field, but occasionally we get something. Like, people like to go to football games, and those tickets can be expensive. Or maybe there's an iPad or something that you are raffling off: "If you fill out this follow-up survey, we're gonna raffle off an iPad to one of the first 100 people who do it." Is there an opportunity to move beyond?

0:04:48.0 DG: The second issue I wanna talk about goes back to that "always intervening" versus "sometimes intervening." It's not currently published, but there is emerging research that I think will be published soon, and most researchers are going to a ratio score of intervening. So, instead of asking, "Do you intervene: yes or no?", it's asking about a number of scenarios, like, "Here are 10 scenarios: how many of them did you witness? Of those that you witnessed, in how many did you intervene?" So, if I ask you about 10 scenarios and you say you witnessed three of them, that would be the denominator in my model, but you only intervened in one, I would say that your intervention score is one over three, or 0.33. So, you intervened 33% of the time, versus 100% of the time, versus 14% or zero. So, this is becoming a best practice that I think practitioners can start to implement as well when they talk about changes in behaviors: we have to consider what people witness, and then whether they intervene or not in that. Someone who has a score of 100% but only saw one thing looks very different from someone who has a score of 50% but witnessed 500 things, so there's a limitation to this measurement moving forward, but we think it's a step in the right direction.
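To make the arithmetic of that ratio score concrete, here is a minimal sketch in Python, assuming a hypothetical set of survey responses; the field names and example data are illustrative, not drawn from any published instrument.

```python
# Ratio score sketch: interventions divided by opportunities witnessed.
# Each respondent reports, for a fixed list of scenarios, whether they
# witnessed it and, if so, whether they intervened. Hypothetical data.

def intervention_ratio(responses):
    """Return intervened / witnessed, or None if nothing was witnessed."""
    witnessed = sum(1 for r in responses if r["witnessed"])
    intervened = sum(1 for r in responses if r["witnessed"] and r["intervened"])
    if witnessed == 0:
        return None  # the ratio is undefined when no scenarios were witnessed
    return intervened / witnessed

# A respondent who witnessed 3 of 10 scenarios and intervened in 1 of them
respondent = [
    {"witnessed": True, "intervened": True},
    {"witnessed": True, "intervened": False},
    {"witnessed": True, "intervened": False},
] + [{"witnessed": False, "intervened": False}] * 7

print(intervention_ratio(respondent))  # 0.33... -> intervened 33% of the time
```

Reporting the number of scenarios witnessed alongside the ratio helps with the limitation noted above, since 100% based on a single opportunity reads very differently from 50% based on hundreds.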
0:06:11.8 DG: And then, last but not least, I have this big question mark, and I'm having it stand on its own because it's about reality and actuality. Something that's emerging in my research and dataset is that we're having some misclassification issues, meaning issues when we use closed-ended categories for measurement. So, "This scenario happened, what did you do?" This is common, right? "I did nothing," "I went and got help," "I tried to separate them," "I created a distraction," "I physically intervened." These are common categories that people might use. So, we did a study where we asked these things, but then we went further and said, "Can you explain what you mean when you said you physically intervened?" And students are starting to write things like, "I would tell them to stop." Well, telling someone to stop is not physically intervening. That's a verbal strategy. That would be a direct strategy.

0:07:07.1 DG: And a lot of the time, these things are co-occurring, or sometimes there are three or four different strategies: "I created a distraction, I separated someone, then I went to get help." When we only give people single categories, are we getting anywhere near the reality of their scenario? So, a suggestion for that is to consider more open-ended and qualitative approaches, and to allow people to check more than one option: "What did you do in this scenario? Check all that apply." That's the easy, low-hanging-fruit option. And then, because we're looking at reality and actuality, maybe even asking some what-happened-next questions: "So, what happened after you did that? Was it positive? Was it negative? Was it helpful, harmful? Did you feel good about it?" These are things that we could start considering on our own campuses and in our communities as well.

0:08:01.0 SL: Thank you for listening to this episode of Resource on the Go. For more resources and information about preventing sexual assault, visit our website at www.nsvrc.org. To learn more about measuring bystander intervention, visit the episode resources at nsvrc.org/podcasts. You can also get in touch with us by emailing resources@nsvrc-respecttogether.org. [music]