Once you’ve decided which goals you will evaluate and the evaluation questions you need to answer, you’ll want to think about indicators (i.e., specific process or impact measures) and sources of data. For each evaluation question you pose, you will need indicators to answer that question.
- Process measures track the activities that take place during the initiative, helping you determine how well things are going. Examples include:
  - Activities: number of meetings with policymakers, number of classes or workshops held
  - Participation: number of participants, frequency of participation
  - Enforcement: number of variances from established protocols, number of citations issued for breaking laws
  - Communication: number of media stories, letters to the editor, or op-eds about your efforts; number of people on your email or mailing list; number of messages sent to that list
- Impact measures (also called outcome measures) highlight the changes that happen in the community as a result of your initiative's work. Examples include:
  - Participant-level indicators, such as changes in knowledge, behavior, or perceptions of an issue
  - Community-level indicators relevant to your priority issue, such as changes in environments, laws, cultural norms, or health status indicators
The Community Tool Box provides a helpful overview of how to find and use community-level indicators. As you select your indicators, consider whether they are available, accurate, fairly easy to collect and relevant to the initiative.¹
If you are evaluating an advocacy/policy initiative, consider the four types of measures included in the Advocacy Progress Planner:
- Impact measures show the effects of achieved policy goals on the programs, systems, or populations the policies aim to improve.
- Policy goal measures signal whether policy goals have been achieved.
- Activity/tactic measures count which advocacy activities or tactics were carried out and how many. While these measures are easy to capture, they don’t show how well a tactic worked.
- Interim outcome measures signal progress toward achievement of policy goals, capturing the changes in the target audience as a result of the advocacy effort.
As you think about which measures to use, be sure to pick the most meaningful and useful ones. More data doesn’t necessarily mean better data; too much can be overwhelming. Be sure you’re clear about how you will use each piece of data. Keep these questions in mind:
- How well does the measure link to the strategy? Does it capture the strategy’s effects?
- Are data currently being collected? If not, what are the costs of collecting additional data and are these costs worth it?
- Is the measure important to most people? Will it provide sufficient information to convince both supporters and skeptics?
- Is the measure quantitative or qualitative? While numerical indicators are often useful and understandable, sometimes qualitative information (e.g., a participant’s story) is more relevant and important.
Where will you get your data? Sources of data include people, documents, observations, and existing data sets. The following are some considerations for selecting data sources:
- Use different types of sources to assess different perspectives.
- Clearly state your criteria for selecting sources.
- Use both qualitative and quantitative sources.
- Collect data from enough people to make results reliable but not from so many that data collection is impractical.
- Consider oversampling from certain populations so that you can get an accurate picture of disparities.
- Estimate in advance the amount of data you will collect (consider consulting professional help).
- Minimize the burden on respondents (e.g., don’t make the survey or interview too long).²
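The handbook cited above does not prescribe a formula for deciding how many people is "enough," but one common rule of thumb for surveys is Cochran's sample-size formula with a finite-population correction. The sketch below is illustrative only (the function name and defaults are our own assumptions); it assumes a 95% confidence level, a ±5% margin of error, and the most conservative response proportion (p = 0.5).

```python
import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Estimate how many respondents to survey.

    Uses Cochran's formula, then applies a finite-population
    correction so small communities don't require an
    impossibly large sample. Defaults assume a 95% confidence
    level (z = 1.96) and a +/-5% margin of error.
    """
    # Cochran's formula for an effectively infinite population
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)
    # Finite-population correction
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# Example: a community of 5,000 residents
print(sample_size(5000))  # → 357
```

For very large populations the correction barely matters (the estimate settles near 385 at these defaults), which is why published "magic numbers" for survey sizes look similar regardless of city size. Oversampling specific subgroups, as suggested above, would be layered on top of an estimate like this.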
1. KU Work Group for Community Health and Development. Chapter 38, Section 9: Gathering and Using Community-Level Indicators. In: The Community Tool Box. Lawrence, KS: University of Kansas; 2012.
2. US Department of Health and Human Services. Physical Activity Evaluation Handbook. In: US Department of Health and Human Services, editor. Atlanta, GA: Centers for Disease Control and Prevention; 2002.