Published June 3, 2021
At Schusterman Family Philanthropies, evaluation and learning are essential tools for the success of the Schusterman Fellowship and REALITY, two of our operating programs. By understanding how participants experience our programs, we can iterate on our program plans to ensure they remain relevant and valuable.
But at the start of 2020, the unexpected impact of COVID-19 forced us to halt our program evaluations. This was a new experience for us. While we are accustomed to creating time for reflection and learning in our evaluation processes, never before had we pressed pause on the entire operation. Doing so felt scary at the time, but the sudden downshift turned out to be highly useful for our team.
Bringing our evaluations to a full stop, with no open surveys in the field and no impending deadlines to meet, allowed us to assess our entire approach to evaluation without the pressure of turning around a deliverable. We devoted time to revisiting and refining our logic models where appropriate, paring down our survey and interview protocols to remove excess questions, and re-creating learning questions [PDF] that could help us understand the effects of shifting to a fully virtual programming model.
Through these exercises, we uncovered three lessons on how to better support our participants during the pandemic and beyond:
1. Ask tough questions about your evaluation strategy.
The sudden pause in our evaluation activities created room for us to ask ourselves hard questions, including: Is this round of data collection essential? What is most important for us to learn right now? How can we learn without burdening participants? How is our evaluation process advancing equity? In tackling these questions head-on, we were able to refine our evaluation tools in a way that better aligned with our program goals.
Example: We took a step back from our evaluation work to reground in our logic model and to question whether anything had changed because of the pandemic. It had not, but we did reprioritize where we were focusing our evaluation. With the Schusterman Fellowship, for instance, we shifted from surveying Fellowship cohorts at 12-, 24- and 36-month intervals to sending a single survey to all cohorts annually. This shift streamlined our data collection process for our evaluation strategy moving forward.
2. Treat evaluations as another form of communication.
Evaluation is now about more than measurement; it provides an opportunity for care and connection. The adverse effects of the pandemic reminded us to consider participants’ mental and emotional state when reaching out for their feedback. The process (how and when we ask questions) is just as important as the outcome (the data we receive).
Example: Rather than send three to five generic reminder emails about completing a survey, we wanted to use those touchpoints as opportunities to check in with our community. We offered more flexibility by providing more time to complete the survey, making participation voluntary in some cases and acknowledging the current state of our world within the survey questions. Our goal was to help participants feel seen and supported and to integrate the evaluation into the context of their lives. We made these changes when surveying both our Schusterman Fellows and our REALITY alumni. As a result, participation levels remained stable or increased.
3. Consider creative ways to collect data.
While data collection processes are meant to improve the quality of our programs, the cadence and format of evaluations can sometimes feel cumbersome. As Schusterman continued to run virtual programming during COVID-19, we decided to test different data collection methods that would provide the data we needed without burdening our participants.
Example: Following one of the virtual gatherings for the Schusterman Fellowship, we sent participating Fellows a brief two-question survey to quickly gauge gut reactions to the event. Although only four or five individuals responded, the feedback we received was fast, honest and helpful for planning upcoming virtual events. For future evaluations, we are considering soliciting feedback through participant video diaries, town halls and peer-to-peer interviews.
When measuring the impact of programs, evaluators seek to learn what worked and what can be improved. Within that process, it is important to assess how the evaluation itself is designed. Although this assessment requires additional time during program planning, building in space for pause, reflection and learning is critical to collecting data that meets the changing needs of both program planners and program participants.
Moving forward, our aim for the Schusterman Fellowship and REALITY is to build in opportunities to ask thoughtful questions that improve our evaluation design, strengthen our capacity to approach our work with empathy and compassion, and expand our thinking about how we can gather data in creative and effective ways.
What have you done to review or pivot your evaluation during the pandemic? What lessons are you carrying forward?
Looking for more insights on strengthening your evaluation strategy? Check out our guide to diversity, equity and inclusion in data collection.