Evaluate Actions:
Key Activities

Activity 1 of 11

Prepare to Evaluate

The Take Action Cycle shows evaluation at the end of the cycle, but in reality, evaluation should be incorporated throughout your process of improving your community’s health.

As you work to build a healthy community, your evaluation will take on different purposes. 

This guide focuses on both program and policy advocacy evaluation. Because policy change is complex, multi-step, and incremental, what you measure and the evaluation tools you use differ from those used in program evaluation. We’ll provide guidance throughout this guide on where advocacy/policy evaluation diverges from program evaluation.

The time, money, and resources you spend on evaluation are an investment in your overall efforts. Evaluation can be used for learning about what’s working so you can replicate or expand your efforts or for accountability to demonstrate that your policy or program is having the intended impact. Evaluation is as much a mindset as an event or activity. Take time to talk about how each person in your team views evaluation and how they value the process of using reflection and dialogue to gain insight about information gathered.

There are a few things you’ll want in place as you embark on an evaluation process:

Understanding of your readiness 

Are you ready to evaluate? The Evaluability Assessment helps determine your program’s readiness.

Resources to support an evaluation

Estimates of evaluation costs range from 5 to 7 percent up to 15 to 20 percent of your total budget.1,2 How much you need to invest in evaluation depends on a variety of factors, including:

  • The scope of your policy or program,
  • The number of outcomes you want to assess,
  • Who conducts the evaluation, and
  • Your available evaluation-related resources. 
A core team to guide and inform the evaluation

Evaluation is a team effort. Assemble a core evaluation team that knows the community, asks good questions, is thoughtful and reflective, and is committed to developing and guiding the process with as much neutrality as possible.3

Clarity about who will lead the evaluation 

You will need to determine who will do the evaluation (e.g., your core team, staff, or an outside evaluator).

If you’re embarking on an innovative strategy and the path ahead is unclear, you may want to consider engaging someone experienced in developmental evaluation. The developmental evaluator is part of the team that is designing and testing the innovation. Their primary role is to bring evaluative thinking into the process. A Developmental Evaluation Primer describes the skills required (pp. 40-45) for this approach.

Shared understanding of how and when the evaluation will be used

When your team is ready to evaluate either a program or an advocacy/policy initiative, decide together how and when you will use evaluation. Following are some common ways communities use evaluation.

  • To gain insight (also called formative evaluation):
    • Assess the level of community interest in a desired policy or program, and use that information to plan how to implement it.
    • Identify challenges to and opportunities for a desired policy or program, and use that information to advocate for it.
  • To improve a policy or program (also called process evaluation):
    • Monitor the implementation of your selected policy or program, and use the results to enhance components of the policy or program.
    • Survey your target audience, and use that information to improve the content and delivery of your communication, policy, or program.
    • Assess staffing needs and internal systems (e.g., social media marketing skills, communication systems) to improve organizational capacity to deliver results.
    • Assess the partnership to continuously improve and strengthen its capabilities.
  • To support innovation in an uncertain environment (also called developmental evaluation)
    • Reflect on feedback from an evaluator to help conceptualize, design and test new strategies.
    • Collect data about how a selected strategy is unfolding in real time in order to decide whether to abandon or continue in that direction.
  • To evaluate policy or program effects (also called impact, results, or outcome evaluation):
    • Measure the extent to which your outcome indicators are being met, and use the results to improve your policy or program and be accountable to your funders.
    • Use information about which target populations benefited most from your policy or program to target future efforts more effectively.
    • Use outcomes to be accountable to your community and to your decision makers.

As you work with your core team to clarify how and when evaluation will be used, consider:

  • What do stakeholders want to know? How will they use the data?
  • What does your community need and want to know? How will key leaders be engaged in shaping the evaluation plan and sharing results? How will you incorporate those who are most vulnerable, those who are experiencing the worst conditions for good health, in the creation of your evaluation methods and processes?4
  • What do funders require?

1.    Office of Planning, Research and Evaluation. Chapter 2: What is Program Evaluation? In: The Program Manager’s Guide to Evaluation. Second ed. Washington, DC: U.S. Department of Health and Human Services; 2010.

2.    Frances Dunn Butterfoss. Evaluating Coalitions and Partnerships. In: Coalitions and Partnerships in Community Health. San Francisco, CA: Jossey-Bass; 2007.

3.    Office of Planning, Research and Evaluation. Chapter 3: Who Should Conduct Your Evaluation? In: The Program Manager’s Guide to Evaluation. Second ed. Washington, DC: U.S. Department of Health and Human Services; 2010.

4.    Sonali S. Balajee, et al. Equity and Empowerment Lens (Racial Justice Focus). Portland, OR: Multnomah County; 2012.

Activity 2 of 11

Build Consensus Around an Evaluation Plan

A successful evaluation process begins by engaging those with a vested interest in your selected policy or program. Take some time to brainstorm who your stakeholders are before you create your evaluation plan. 

Identify stakeholders

Each type of stakeholder will have a different perspective on your policy or program as well as what they want to learn from the evaluation. You can group stakeholders into four main categories, depending on your specific policy or program.

  • Implementers: those involved in making the policy or program happen.
  • Partners: those who actively support the policy or program.
  • Participants: those served or affected by the policy or program.
  • Decision makers: those in a position to do or decide something about the policy or program.¹

As you consider who to involve from each stakeholder group, think about how you can identify, support, and include diverse participants so the evaluation design reflects different perspectives. How do people of different status see the problem and therefore potential solutions? Think about how you can create an inclusive process that will build shared understanding of the community values that are the context for your program or policy initiative.

Decide evaluation roles

Once you’ve identified who should be at the evaluation table with your core evaluation team, decide how each stakeholder should be involved. Involving everyone in each step is unwieldy. Decisions about stakeholder involvement are not easy but can be made according to needs and interests, authority, control of resources, time availability, or specific knowledge or skills. Certain stakeholders might be key for certain stages of the process.¹

The CDC’s Physical Activity Evaluation Handbook includes questions for stakeholders that can be used throughout the evaluation process. It can help you understand stakeholders’ interests in your efforts and identify evaluation questions.

  • What is important about this policy or program?
  • Who do you represent, or why are you interested in this policy or program?
  • What would you like this policy or program to accomplish?
  • What are the critical evaluation questions?
  • How will you use the results of this evaluation?
  • What resources (e.g., time, evaluation experience, funding) can you contribute to this evaluation?¹

1.    US Department of Health and Human Services. Physical Activity Evaluation Handbook. In: US Department of Health and Human Services, editor. Atlanta, GA: Centers for Disease Control and Prevention; 2002.

Activity 3 of 11

Decide What Goals are Most Important to Evaluate

Strong evaluations begin with a clear, visual map showing how strategies will achieve change. Look at the goals and plans you developed for your policy or program. Why did you decide to use certain strategies? What outcomes were you expecting as a result? 

If you haven’t developed a logic model or grounded your actions in a theory of change, now is the time to go back to Choose Effective Policies & Programs and clearly define your goal.

Evaluating advocacy and policy initiatives

If you are evaluating an advocacy or policy initiative, the Advocacy Progress Planner uses a logic model framework to help you think through a menu of goals, outcomes, and strategy options. Use the online planner to:

  1. Define the impacts and policy goals your advocacy strategy is trying to achieve.
  2. Identify activities and tactics you are using to achieve the impacts and policy goals.
  3. Identify interim outcomes.
Use short-, medium-, and long-term goals to demonstrate progress over time

If your program or policy efforts only focus on long-term goals (such as reducing obesity), you will not be able to demonstrate progress and your initiative will likely lose momentum. If you also think about short- or medium-term goals (such as increasing children’s fruit and vegetable consumption by adopting a farm-to-school policy in your school district), you will be able to measure progress throughout your initiative.

Since policy goals usually take years to accomplish, it’s especially important to measure interim goals along the way so that advocacy efforts aren’t unfairly judged as unsuccessful.

Activity 4 of 11

Determine Your Evaluation Question(s)

Based on the goals you determine to be most important to evaluate, what evaluation questions do you want to answer? 

Program evaluation

Planning and Implementation: How well was the program planned? Is it being implemented as intended? This is often done through ongoing monitoring of key indicators. Sample Questions:

  • What are we doing? When? Where? How much?
  • Are we on track with time and resources?
  • Are we delivering the program as planned? If not, why has it varied?
  • Are we reaching the intended population of people?
  • Did the program evolve during the process? How? 

Impact: Did the program have the intended results? Data collected as part of your ongoing process evaluation may inform your impact evaluation. Sample Questions:

  • What is different as a result of our actions? How are the people we’re trying to help different as a result of what we did?
  • What did we accomplish? Did we achieve our outcomes? Why or why not?
  • Were there any unintended effects of the program? 
Advocacy/policy evaluation

Advocacy/policy evaluation questions may differ from program evaluation questions since the outcomes are different, with one set of outcomes often contingent on others. (For example, in order to provide people with equal access to high-quality care, you first have to build public and political will to make policy and systems changes.)

Depending on which of the following six outcome categories you are evaluating, you might have different questions. Be prepared to revise these questions as the evaluation proceeds.

Outcome categories and sample questions

Shift in social norms

  • How did beliefs, attitudes, public behavior change?
  • How effective is our media strategy in reframing the policy issue?

Strengthened organizational capacity

  • Does the advocacy organization have improved capacity to communicate and promote advocacy messages?
  • Do community members have this capacity?

Strengthened alliances

  • Are there an increased number of partners supporting an issue?

Strengthened base of support

  • Is the advocacy effort increasing public will among its target audiences?

Improved policies

  • Is there adequate funding and other resources for implementing a policy?

Changes in impact

  • Were social and physical conditions improved?¹


1.    Jane Reisman, Anne Gienapp, Sarah Stachowiak (Organizational Research Services). A Guide to Measuring Advocacy and Policy. Prepared for the Annie E. Casey Foundation, Baltimore, MD; 2007. p. 16-22.

Activity 5 of 11

Identify Indicators and How to Collect Data

Once you’ve decided which goals you will evaluate and the evaluation questions you need to answer, you’ll want to think about indicators (i.e., specific process or impact measures) and sources of data. For each evaluation question you pose, you will need indicators to answer that question.

Process measures

Process measures are activities that take place during the initiative that help you determine how well things are going. Examples include:

  • Activities: number of meetings with policymakers, number of classes or workshops held
  • Participation: number of participants, frequency of participation
  • Enforcement: number of variances from established protocols, number of citations issued for breaking laws
  • Communication: number of media stories, letters to the editor or op-eds about your efforts, number of people on your email or mailing list, number of messages sent using email or mailing list
Impact measures

Impact measures describe the overall effects that occur as a result of your actions; outcome measures highlight the changes that happen in the community as a result of the work done by your initiative. Examples include:

  • Participant-level indicators such as changes in knowledge, behavior, or perceptions of an issue
  • Community-level indicators relevant to your priority issue, such as changes in environments, laws, cultural norms, and health status indicators

The Community Tool Box provides a helpful overview of how to find and use community-level indicators. As you select your indicators, consider whether they are available, accurate, fairly easy to collect, and relevant to the initiative.¹

If you are evaluating an advocacy/policy initiative, consider the four types of measures included in the Advocacy Progress Planner:

  1. Impact measures show the effects of policy goals for the programs, systems, or populations that policies aim to improve.
  2. Policy goal measures signal whether policy goals have been achieved.
  3. Activity/tactic measures count what and how many advocacy activities or tactics were produced or accomplished. While these measures are easy to capture, they don’t explain how well the tactic worked.
  4. Interim outcome measures signal progress toward achievement of policy goals, capturing the changes in the target audience as a result of the advocacy effort.
Which measures to use?

As you think about which measures to use, be sure to pick the most meaningful and useful ones. More data doesn’t necessarily mean better data; too much can be overwhelming. Be clear about how you will use each piece of data. Keep these questions in mind:

  • How well does the measure link to the strategy? Does it capture the strategy’s effects?
  • Are data currently being collected? If not, what are the costs of collecting additional data and are these costs worth it?
  • Is the measure important to most people? Will it provide sufficient information to convince both supporters and skeptics?
  • Is the measure quantitative or qualitative? While numerical indicators are often useful and understandable, sometimes qualitative information (e.g., a participant’s story) is more relevant and important.
Where will you get your data?

Sources of data include people, documents, observations, or existing data sources. Following are some considerations for selecting data sources:

  • Use different types of sources to assess different perspectives.
  • State your criteria for selecting sources.
  • Use both qualitative and quantitative sources.
  • Collect data from enough people to make results reliable but not from so many that data collection is impractical.
  • Consider oversampling from certain populations so that you can get an accurate picture of disparities.
  • Estimate in advance the amount of data you will collect (consider consulting professional help).
  • Minimize the burden on respondents (e.g., don’t make the survey or interview too long).²

1.    KU Work Group for Community Health and Development. Chapter 38, Section 9: Gathering and Using Community-Level Indicators. In: The Community Tool Box. Lawrence, KS: University of Kansas; 2012.

2.    US Department of Health and Human Services. Physical Activity Evaluation Handbook. In: US Department of Health and Human Services, editor. Atlanta, GA: Centers for Disease Control and Prevention; 2002.


Activity 6 of 11

Identify Benchmarks for Success

Before collecting data, decide on the expected effects of the policy or program on each indicator. This “goal” for each indicator is your benchmark for success. Benchmarks should be achievable but challenging.

Benchmarks for success are most often based on an expected change from a known baseline.

Tips for setting benchmarks
  • Consider how far along the policy or program is in implementation, your logic model, and your stakeholders’ expectations.¹
  • Past performance can help you know how much change to expect. 
  • Use the data you collected in the Assess Needs & Resources step to help set your benchmarks for success. If you don’t have data on past performance, you may want to wait until you have baseline data before specifying your benchmark.
  • Another way to avoid setting arbitrary benchmarks is to review the impacts of comparable policies or programs, review goals set by other credible organizations (e.g., Healthy People 2020), and/or look to the evaluation literature for parallels.²

In advocacy/policy evaluation, it may be helpful to identify milestones (vs. “performance benchmarks”) along the way, to keep the momentum going. For example, you might have a long-term goal to increase the state funding for early childhood education by $75 million. But if you're able to secure a $10 million increase through your advocacy efforts after only a year, that would be an important milestone achievement.
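One simple way to report a milestone like this is as a percentage of the long-term goal. The sketch below uses the figures from the example above ($10 million secured toward a $75 million goal); the function name is our own.

```python
def milestone_progress(secured, goal):
    """Percent of a long-term policy goal achieved so far."""
    return 100 * secured / goal

# Figures from the example above: $10M increase secured toward a $75M goal
print(f"{milestone_progress(10_000_000, 75_000_000):.1f}% of goal reached")
# 13.3% of goal reached
```

Reporting interim progress this way helps keep a multi-year campaign from being judged solely against its end goal.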

1.    US Department of Health and Human Services. Physical Activity Evaluation Handbook. In: US Department of Health and Human Services, editor. Atlanta, GA: Centers for Disease Control and Prevention; 2002.
2.    Michael Quinn Patton. Utilization-Focused Evaluation. Third ed. Thousand Oaks, CA: Sage Publications, Inc.; 1997.

Activity 7 of 11

Establish Data Collection and Analysis Systems

Use your evaluation plan to establish data collection and analysis systems, determine who will collect and use the data, and decide how you will analyze the data to provide insights into your policy or program.

A collaborative approach involving diverse parties, including those most affected by the issue, will build ownership in the evaluation results, increase stakeholders’ evaluation skills, and increase the likelihood that evaluators will be sensitive to participants. 

Steps for collecting data

Clarify what data will be collected, from whom and how best to get it.

  • Make sure the data you plan to collect will help you answer your evaluation questions.
  • Decide what type of information is easily obtained from other sources and what new data you will need to collect.
  • Consider the data you collected in the Assess Needs & Resources step. Would collecting from these same sources help answer some of your evaluation questions?
  • Consider using a combination of qualitative and quantitative methods to give your team the best picture of whether your actions are effective. Assessing Community Needs & Resources in the Community Tool Box provides different methods for collecting information.

Develop clear procedures for gathering, analyzing and interpreting data.

  • What will be reviewed and how often?
  • Build in considerations for different cultural perspectives of respondents, such as language and literacy needs.

Develop data collection forms/instruments.

  • Make sure your forms and instruments measure what you want to measure and that they will provide similar answers with the same population even if administered at different times or places.
  • Surveys are commonly used to collect evaluation information from participants in a program or those affected by a policy.
  • Develop training and technical support for those who will collect and analyze the data.

Pilot data collection processes so they can be improved prior to full implementation.

Collecting data for advocacy/policy initiative evaluation

Deciding what data to collect for advocacy/policy evaluation can be challenging because the dynamic nature of policy work makes it hard to measure.

Consider non-conventional methods in addition to traditional data collection methods (e.g., surveys, interviews, document review, observation, polling, focus groups, case studies). Use A Handbook of Data Collection Tools to select specific data tools that align with the outcomes you want to measure.

  • Shift in social norms (Suggested tools: interviews, focus groups, meeting observation checklist, rolling sample survey)
  • Strengthened organizational capacity (Suggested tools: Advocacy Capacity Assessment, spider diagram)
  • Strengthened alliances (Suggested tools: Intensity of Integration Assessment)
  • Strengthened base of support (Suggested tools: Increased Engagement of Champions, Checklist for Mobilization and Advocacy)
  • Improved policies (Suggested tools: Legislative Process Tracking, Monitoring Policy Implementation, Changes in Physical Environments)
Additional tools specifically for policy/advocacy evaluation 

Unique Methods in Advocacy Evaluation highlights these four tools for advocacy evaluation:

  1. Bellwether Method – structured interviews with influential people
  2. Policymaker Ratings – assessment of policymakers’ support for issues
  3. Intense Period Debriefs – discussion of actions after a policy window or intense action
  4. System Mapping – visual map of what is expected to change in a system and how to measure it

If you are evaluating media strategies, you may be interested in whether media coverage of an issue increases over time. Typically, this requires an online database like LexisNexis, a news tracking service that offers a searchable database at the national, state, and local levels. If you want to see whether your media strategy has changed how the media covers an issue, you’ll need to do a content analysis of articles, which requires more time and resources.
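Tracking whether coverage is increasing can be as simple as tallying articles per year from a database search. The sketch below uses hypothetical publication dates; in practice you would export these from your news database.

```python
from collections import Counter

# Hypothetical publication dates exported from a news database search
dates = ["2019-03-02", "2019-11-15", "2020-01-08", "2020-06-21", "2020-09-30"]

# Tally articles by year (the first four characters of an ISO date)
coverage_by_year = Counter(d[:4] for d in dates)
print(sorted(coverage_by_year.items()))
# [('2019', 2), ('2020', 3)]
```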

Activity 8 of 11

Collect Credible Data

Now that you’ve developed your data collection and analysis systems it’s time to put them to work.

Start collecting data
  • Train people in how to collect the data.
  • Track and organize the data as you go, both to keep the evaluation moving and to protect the confidentiality of the data.
  • Collect only data that will be used and use all data collected.
  • Check in throughout the data collection process to reflect on what’s working, what could be improved, and whether you have answered your original evaluation questions.
Assure data quality

Throughout the data collection process, also periodically stop to review the quality of the data you’re gathering. Some questions to consider:

  • Do the data reflect the people who live in the community? The demographics of respondents should match the demographics of the priority population.
  • Do the data reflect the behavior of the priority population? Are you measuring short- and medium-term outcomes of behavior change?
  • Are the data plausible? Sometimes sampling strategies don’t detect what is actually happening and other methods may be needed.¹
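The first check above, whether respondents reflect the community, can be made concrete by comparing respondent shares against priority-population shares group by group. All the percentages below are hypothetical placeholders for your own census or survey data, and the function name is our own.

```python
def demographic_gaps(respondents, population):
    """Percentage-point difference between respondent shares and
    priority-population shares, per demographic group."""
    return {group: round(respondents.get(group, 0) - population[group], 1)
            for group in population}

# Hypothetical shares (in percent) -- replace with your own data
population  = {"under 18": 25.0, "18-64": 60.0, "65+": 15.0}
respondents = {"under 18": 10.0, "18-64": 70.0, "65+": 20.0}

print(demographic_gaps(respondents, population))
# {'under 18': -15.0, '18-64': 10.0, '65+': 5.0}
```

Large gaps (here, youth are underrepresented by 15 percentage points) signal that more targeted outreach or a different sampling strategy may be needed.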
Monitor progress toward achieving benchmarks

As you collect data for your evaluation, make time to occasionally check in on the goals and benchmarks you established in your evaluation plan. How are you doing on your short-, medium-, and long-term goals? 

If you’re engaged in advocacy campaigns, it’s as important to celebrate “wins” as it is to understand how you got there. Use Assess Effectiveness of Advocacy Efforts to establish, track, and celebrate benchmarks that reflect different inputs that go into an advocacy campaign.

1.    Frances Dunn Butterfoss. Evaluating Coalitions and Partnerships. In: Coalitions and Partnerships in Community Health. San Francisco, CA: Jossey-Bass; 2007.

Activity 9 of 11

Review Evaluation Results and Adjust Your Policy Implementation or Program(s)

You can use your evaluation results to make recommendations for continuing, expanding, redesigning, or abandoning your policy or program.

Go back to your initial assessment and problem definition. Are your efforts having an impact on the problem you set out to address? Is the original problem definition still accurate?

As you think about recommendations, you may want to revisit the work your team did to develop your evaluation plan, including your evaluation purpose and intended use.

Create a feedback loop

This is an important time to re-engage stakeholders and solicit their feedback. Following are some tips for doing so:

  • Consider your stakeholders’ values and align your recommendations with them.
  • Share draft recommendations with stakeholders and solicit feedback.
  • Relate your recommendations to the original purposes and uses of the evaluation.
  • Target your recommendations appropriately for each audience.¹

Once you’ve vetted your recommendations with stakeholders, you can begin to use the evaluation findings to adjust your policy and program as necessary.

1.    US Department of Health and Human Services. Physical Activity Evaluation Handbook. In: US Department of Health and Human Services, editor. Atlanta, GA: Centers for Disease Control and Prevention; 2002.

Activity 10 of 11

Share Your Evaluation Results

Sharing your results is an important part of your evaluation: it raises community awareness and helps to mobilize and maintain support.

An evaluation report can be used to:

  • Guide decisions about future policy and program implementation.
  • Tell the “story” of your efforts and demonstrate the impact of the policy or program.
  • Advocate for your efforts with potential funders.
  • Help other communities learn from your experiences.¹
  • Contribute to the knowledge base about what works and what doesn’t work.
  • Show that policy or system change can effectively impact individual behaviors and health outcomes.

Consider different reporting strategies depending on your purpose and audience.

1.    Office of Planning, Research and Evaluation. Chapter 9: How Can You Report What You Have Learned? In: The Program Manager’s Guide to Evaluation. Second ed. Washington, DC: U.S. Department of Health and Human Services; 2010.

Activity 11 of 11

Evaluate Your Partnership and Make Changes

Evaluating your partnership allows you to be sure that what you are doing is working the way you intended and that your partnership is as effective as possible. 

Meaningful evaluation should increase the effectiveness of the partnership’s process as well as enhance the outcomes of the partnership’s work.¹

Tools included here provide some assessments to help you understand how you might work together better. Evaluating your partnership’s objectives, activities, processes, and unanticipated events is also important. Step eight of Developing Effective Coalitions: An Eight Step Guide (pp. 24-25) provides more detail about evaluating your partnership’s efforts.

Measuring effectiveness when so many partners are involved can seem challenging.² The key is to stay focused on clear, shared goals and the commitment to a collaborative relationship. Using reflective practices such as After Action Review and What, So What, Now What? can help create “feedback-rich” environments that strengthen partnerships’ capacity for learning, change, and improvement.

1.    Sonali S. Balajee, et al. Equity and Empowerment Lens (Racial Justice Focus). Portland, OR: Multnomah County; 2012.

2.    Jeff Raikes, Tom Tierney, L. Schorr. Funding for Impact: How to Know When Your Giving is Getting Results webinar. In: SSIR Live! webinar series, retrieved from http://www.ssireview.org/webinars; Dec. 10, 2013.