Addressing the underlying issues of watershed health – such as poor water quality and degraded wildlife habitat – often requires voluntary action by people. In other words, people have to do something differently.
The ultimate goal of many watershed outreach programs, whether implicit or explicit, is to motivate people to take an action or make a change that achieves biophysical goals for watershed health. Evaluating the effectiveness of outreach programs is therefore important for knowing whether the strategy is working.
“Just as we need to monitor and evaluate changes in water quality, we also want to monitor those changes in behavior that are going to result in our desired outcomes,” said Adam Reimer, evaluation and outreach scientist with the National Wildlife Federation. “We need feedback about what strategies, what events, what kind of programming are resulting in changes in behavior.”
The problem is that not only are few watershed professionals trained in how to do evaluation, but behavior change evaluation is also chronically undervalued and underfunded.
“We don’t do evaluation in a rigorous way because funders and programs tend to not support it properly,” said Reimer.
So, what is a watershed professional to do if they want to evaluate their outreach program but aren’t sure where to start or have few resources to make it happen?
First, begin with a clear goal in mind.
Ask yourself: What is my desired outcome or the behavior change I want to see? What is my theory of change behind those outcomes? How will my programming sustain behavior change over time? It’s also important to think through what you are trying to accomplish with both an individual program and your overall outreach strategy.
With a clear goal, it can be easier to identify key metrics that are practical to collect and that will give you an indication of whether your program is encouraging people to take the actions necessary to improve the health of your watershed.
For example, if you’ve organized a field day to increase farmers’ interest in adopting a certain conservation practice, you could measure indicators such as attitudes about the practice, confidence in their ability to implement it, or their willingness or intentions to adopt it following the event.
An important caveat is the inherent difficulty of pinning down the impact of a single program or event.
“Human behavior is incredibly complex, and there are lots of things that influence an individual’s behavior or a community’s behavior,” said Reimer.
For example, a farmer’s decision to adopt the practice could occur over the course of one or many growing seasons and may be influenced by a lot of factors outside of what they learn at a single field day. Moreover, we often can’t observe the actual behavior change directly, and instead rely on participants’ intentions or self-reported changes.
Nonetheless, Reimer emphasizes that some data is better than no data, and that collecting data on your program as soon as possible will help you in the long run.
A common way to capture evaluation data is through a short questionnaire at the end of an event asking what participants learned or took away with them. Keeping your questions as consistent as possible from event to event will enable you to compare your data over time.
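To illustrate why consistent questions pay off, here is a minimal sketch in Python, assuming each event’s responses are saved as a CSV with the same hypothetical 1–5 Likert-scale columns (the file names and column names are placeholders, not part of any particular survey tool):

```python
import pandas as pd

# Minimal sketch: compare post-event questionnaire results across events.
# Assumes each event's responses are in a CSV with the same 1-5 Likert
# columns; file names and column names here are hypothetical.
events = {
    "spring_field_day": "responses_spring.csv",
    "fall_field_day": "responses_fall.csv",
}

rows = []
for name, path in events.items():
    responses = pd.read_csv(path)
    means = responses[["attitude", "confidence", "intention"]].mean()
    means.name = name
    rows.append(means)

# One row of average scores per event makes changes over time easy to spot.
print(pd.DataFrame(rows))
```

Because the questions (and thus the columns) are identical across events, the comparison is a single table; if the questions drift from one event to the next, there is nothing to line up.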
Focus on the data that are absolutely necessary to measure progress toward your goal, rather than “nice to have” metrics you wouldn’t know what to do with afterward – a common pitfall, according to Reimer. Picking a few strong metrics will get you farther than going with whatever data are easiest to collect or throwing in everything but the kitchen sink.
Not only will a focused approach improve the odds that survey-overloaded people will take the time to respond, but it might also save your future self the headache of having more data than you really need.
“People’s time and attention are scarce, and we as professionals don’t always have a lot of time to manage, collect, analyze, and report data,” said Reimer.
Reimer also suggests doing a follow-up evaluation weeks or even months after the program or event to see what has stuck with people, since a questionnaire immediately afterwards represents only a snapshot in time. A delayed evaluation may be a better indicator of what participants have actually done differently, versus what they thought they might do at the time.
If you’re seeking additional information about designing an evaluation process, the Internet is awash in resources. Evaluation is an entire field in and of itself, and while you will find a lot of information that is outside the natural resources realm, many of the concepts are still applicable to watershed outreach programming.
Evaluating outreach programs was the topic of a recent Life Hacks over Lunch, the peer learning meet-up series organized by The Confluence for Watershed Leaders. Here are some of the ideas that arose from the lively discussion:
- The Theory of Planned Behavior and concepts from social marketing can help you think about your behavior change goals.
- Make sure to apply an evaluation lens to your overall programming, rather than just a single event.
- The SIDMA tool can help you identify social indicators related to watershed-based efforts to use as metrics in your evaluation.
- Consider using metrics that express emotion.
- Build an evaluation question or discussion into your program. For example, you could use a QR code in a presentation that links to a survey for participants to complete on the spot (see the sketch after this list). Survey tools such as Slido or Mentimeter can come in handy here.
- Use easy, visual means to collect data, such as uploading photos or voting with marbles.
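For the QR code idea above, here is a minimal sketch using Python’s qrcode package; the survey URL and output file name are placeholders to swap for your own:

```python
import qrcode  # pip install qrcode[pil]

# Minimal sketch: generate a QR code image that links to an event survey.
# The URL below is a placeholder; point it at your actual survey.
survey_url = "https://example.com/field-day-survey"
img = qrcode.make(survey_url)

# Save as an image to drop into a presentation slide or printed handout.
img.save("field_day_survey_qr.png")
```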
Header image courtesy University of Kentucky Cooperative Extension