Community Tobacco Prevention and Control Toolkit Evaluation


Evaluation Basics

Evaluation is a formal process for collecting, analyzing, and interpreting information about the effectiveness of your tobacco prevention and control program. It is a tool for assessing how a program is put into effect, and it measures the program’s outcomes, efficiency, and impact over time.

Why Evaluate?

Evaluations are conducted for four main reasons:

  • To improve the program or some aspect of the program
  • To measure the program’s effectiveness
  • To demonstrate the effective use of resources
  • To demonstrate accountability

 

Logic Models

Logic models provide a picture of how your program works. Logic models begin with the inputs to the program and move through its processes and activities to program outputs and to short-term and long-term outcomes.

The Centers for Disease Control and Prevention (CDC) developed logic models for prevention, promotion of quitting, elimination of exposure to secondhand smoke, and elimination of tobacco-related health disparities. The CDC logic models are based on strong scientific evidence for what works in tobacco prevention and control. Based on the community needs assessment, your coalition should decide what program areas to pursue and the population most at risk for tobacco-related diseases. Then the CDC logic model can be tailored for your situation.

The logic model for preventing initiation of tobacco use among young people shows how program inputs can be used to produce different categories of activities that lead to program outputs. Program outputs, in turn, lead to short-term outcomes. Short-term outcomes are sometimes called intervening variables and include knowledge, attitudes, policies and law enforcement.

The CDC has also developed indicators for all outcome elements of the logic model. These indicators are found in state surveillance systems including the Youth Tobacco Survey and the Behavioral Risk Factor Surveillance System. In Texas, data on the indicators are collected by public health region and metropolitan statistical area. Over-sampling among tobacco program-funded coalitions will allow tracking of these indicators. In general, each community will need to collect data related to its inputs, activities, and outputs as these are community-specific.

Indicators for elements of the other logic models can be found at http://www.cdc.gov/tobacco/tobacco_control_programs/surveillance_evaluation/key_outcome/00_pdfs/Key_Indicators.pdf.

Developing an Evaluation Plan

The first step in developing an evaluation plan is clarifying the purpose of the evaluation. The purpose will guide the development of a systematic plan, stating exactly what is to be evaluated.

The evaluation plan involves:

  • Choosing the evaluation type and design
  • Writing clear objectives to focus the evaluation
  • Creating a timeline of tasks and resources to be used in the evaluation
  • Collecting and interpreting data to determine whether the objectives have been met
  • Reporting the results to the stakeholders

 

Evaluation Types

There are three types of evaluation. Formative evaluation is used to test program plans, messages, materials and strategies during the development of the program and while the program is being implemented. Process evaluation occurs while the program is being implemented to determine if it is being delivered as planned. Outcome evaluation occurs after the program is completed and looks at the end results, or the outcomes of the program.

Formative evaluation is conducted during the development of the program to ensure that program materials, strategies, and activities are of the highest possible quality. Formative evaluation ensures that materials, strategies, and activities are feasible, appropriate, meaningful, acceptable and culturally appropriate for the program and its priority population. This type of evaluation should be used during development of a new program. Formative evaluation can also be used when an existing program is being adapted for use with a different priority population or in a new location or setting.

Typical Questions Answered by a Formative Evaluation

  • Are the proposed activities suitable for the priority population?
  • Are the proposed plans and strategies likely to succeed?
  • When is the best time to introduce the program to the priority population?
  • How much publicity and staff training are needed?
  • Are sufficient resources to carry out the program available?
  • Are the program hours and location acceptable?
  • Are staff members comfortable with their roles?
  • Are there beliefs among the priority population that work against the program?

Process evaluation, also called implementation evaluation, is a special type of formative evaluation. The purpose of process evaluation is to learn whether the program is being delivered as planned, for example, whether the number of people served is more or less than expected, whether the people served are members of the priority population, and whether the program is being delivered according to the intended plan or protocol. Process evaluation gives program managers information about implementation so they can act to assure program quality, and it informs stakeholders about the level and quality of program activity.

Typical Questions Answered by a Process Evaluation

  • Was each program activity completed as planned?
  • Was the staff qualified to conduct the program?
  • To how many people/workplaces/communities was each activity delivered?
  • How many program materials were distributed?
  • What were participants’, workplaces’ or communities’ perceptions of each activity? Of the program?
  • What were the staff members’ perceptions of each activity? Of the program?
  • What were the strengths of the way the program was implemented?
  • What were the difficulties, barriers or challenges to implementation?
  • Were all the resources needed for project activities available?
  • What was the nature of the interaction between staff and participants?

Outcome evaluation (also called summative evaluation) assesses changes in individuals, populations, or environments as a result of the program. Changes in the following outcomes are typically examined: behaviors, skills, attitudes, tobacco-related death and disability, and environmental conditions. Such outcomes are divided into short-term, intermediate, and long-term outcomes, depending on when they occur relative to the program. Outcome evaluations are important for making major decisions about program continuation and funding because they indicate whether the program is making a difference. A baseline measure of the outcome of interest is taken before the program has begun and compared to measures at follow-up, which could be immediately after the program or at a later time. The goal of a tobacco prevention and control program is reduced tobacco-related death and disability; these long-term outcomes typically fall outside the scope of a community program evaluation because of the time frame.

Typical Questions Answered by an Outcome Evaluation

  • Was there a change in participants’ knowledge, attitude and beliefs?
  • Was there a decrease in the percentage of the priority population that smoked in the past 30 days?
  • Were there changes in the number or restrictiveness of clean air policies?

Writing Measurable Objectives

Well-written and clearly defined objectives focus the evaluation and set targets for outcomes. The objectives of the program should flow from the logic model. Each evaluation objective should be specific, measurable, and achievable, and should answer the following questions:

  • Who will do what?
  • How much will they do?
  • By when?

 

The evaluation plan should include objectives for the formative, process and outcome elements of the evaluation. The “how much” element of the objective should reflect what planners think is achievable, given the baseline level of the indicator within the population.
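For example, if a hypothetical baseline survey showed that 20% of high school students smoked in the past 30 days, an objective targeting a 30% decrease would set a target of 14% (20% × 0.70); these figures are illustrative only.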

Sample Tobacco Prevention & Control Evaluation Objectives

  • Formative
    • By May 31, 2009, program developers will conduct four focus groups with high school youth to assess their perceptions of the Smoking Is Uncool media campaign
    • By May 31, 2009, trained staff will conduct 10 interviews with current members of the Your City Texas Tobacco Prevention & Control Coalition to determine the degree to which they are able and willing to use the clean indoor air educational materials available from the CDC
  • Process
    • By June 30, 2009, the program will have served 200 members of the identified priority population
    • By June 30, 2009, program staff will have conducted 8 programs training teachers to teach the (name of evidence based program) curriculum
  • Short-Term Outcome
    • By June 30, 2008, 80% of 10th-12th grade students will indicate that it would be hard to get cigarettes if they wanted them
  • Intermediate Outcome
    • By January 1, 2010, the state excise tax will increase by $2.00 per package of cigarettes
  • Long-Term Outcome
    • By January 1, 2012, the proportion of young people who report smoking within the past 30 days will decrease by 30% from the 2008 level

Adapted from Starr, Rogers, Schooley, Porter, Wiesen, & Jamison, Key Outcome Indicators for Evaluating Comprehensive Tobacco Control Programs (2005). http://www.cdc.gov/tobacco/tobacco_control_programs/surveillance_evaluation/key_outcome/00_pdfs/Key_Indicators.pdf

Data Collection

Data collection is a central component of assessing the program’s effectiveness and making decisions about the program’s future. Data collection occurs after the goal of the program and the central questions have been identified.

Where to Get Data

The first step of data collection is to decide whether new data must be collected or whether existing data, often referred to as secondary or archival data, can be used. Sometimes program files or other records, public or private, may contain the information. Some data, like participants’ attitudes toward the program, will come directly from program participants. Other data, like the proportion of minority youth in the community who smoke before and after the program, may be available from state surveys, such as the Texas Youth Tobacco Survey or the Texas Youth Risk Behavior Surveillance Survey. Whenever possible, it is preferable to use pre-existing data sources because developing good data collection tools can be time-consuming and costly. To determine whether it is even possible to use existing data, consider each of the following:

  • Can the existing data be obtained for the coalition’s evaluation purposes?
  • How well can the existing data answer the coalition’s evaluation questions?
  • How well do the existing data represent the coalition’s priority population?
  • How available are the existing data?

 

Determine Credibility of Data

When deciding on which data source to use, consider the accuracy or credibility of the information for the priority population. Having the most accurate data sources possible is the most important factor for any successful evaluation. There are two ways to assess the accuracy of data – reliability and validity.

Data reliability is a measure of the degree to which the data can be reproduced or replicated. For example, suppose two different members of the evaluation team reviewed the congressional record to determine a legislator’s record of support for smoke-free environment legislation. If they both produced the same counts regarding instances of support, the data have reliability.
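
As a small, purely illustrative check of reliability, the Python sketch below computes the percent agreement between two hypothetical reviewers who independently coded the same set of legislative votes; the reviewers, votes, and values are invented for illustration and do not come from any actual record.

    # Hypothetical codings by two reviewers of the same eight votes
    # (1 = supported smoke-free legislation, 0 = did not). Illustrative only.
    reviewer_a = [1, 0, 1, 1, 0, 1, 0, 1]
    reviewer_b = [1, 0, 1, 0, 0, 1, 0, 1]

    # Percent agreement: the share of votes both reviewers coded the same way
    matches = sum(a == b for a, b in zip(reviewer_a, reviewer_b))
    percent_agreement = 100 * matches / len(reviewer_a)
    print(f"Percent agreement: {percent_agreement:.1f}%")  # 87.5% in this example

    # High agreement suggests the coding procedure is reproducible;
    # frequent disagreement signals that the coding rules need clarification.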

Data validity is a measure of the degree to which the data actually measure what they are intended to measure. If the records of a vote indicate that a legislator voted against the smoke-free environment legislation, and, when asked, she confirms that she voted against it, the records have been validated. Conversely, if medical records indicate that 12 percent of pregnant Latina teens smoke, but 18 percent report smoking when interviewed, the medical records data have not been validated.

Determine the Best Method for Collecting Data

After selecting the data sources, choose the data collection methods that will be the most appropriate for achieving the evaluation objectives. If existing data sources are used, the main questions will be “What information do I select from the source?” and “How do I record it for my evaluation purposes?” When collecting new data, there will be many more decisions to make. The first of these is whether to use qualitative methods, quantitative methods, or both.

Qualitative methods are open-ended and allow the evaluator unlimited scope for probing the feelings, beliefs, and impressions of the people participating in the evaluation. They also allow the evaluator to judge the intensity of people’s preference for one item or another. Such methods include individual interviews, observations, and focus groups. Results from qualitative methods cannot be generalized to other populations.

Quantitative methods are ways of gathering objective data that can be expressed in numbers, for example, a count of people or the percentage of change in a particular behavior. The results produced by quantitative methods can be used to draw conclusions about the priority population. Such methods include surveys and questionnaires administered directly to respondents in person, by mail, on the web, by e-mail, or by telephone.

Select Data Collection Instruments

Whether you decide to use an existing instrument or to develop your own, the instrument you use should:

  • Include questions that can be used to measure the concepts addressed or affected by the program, for example, knowledge of tobacco prevention methods
  • Be appropriate for your participants in terms of age or developmental level, language, and ease of use
  • Be written in simple and easy-to-understand language
  • Respect and reflect the participants’ cultural backgrounds
  • Be relevant to the participants’ community and experience

 

Establish Procedures for Collecting Data

To ensure that information will be collected in a consistent and systematic manner, a set of procedures for data collection should be established. Data collectors should be trained in using these procedures.

Data Interpretation

Once the data are collected, they must be analyzed and interpreted. In some cases data analysis and interpretation can be the most time-consuming tasks of the evaluation. Before analyzing the data, it is helpful to revisit the reason for evaluating the tobacco control program in the first place. Review the objectives laid out in the evaluation plan. Each evaluation objective should be linked to the analysis that will be conducted in a data analysis plan. To avoid overlooking key questions or critical information, the data analysis plan should be written when the data collection instruments are being created.

Data Analysis Plan

The data analysis plan will include the following five data analysis steps:

  • Step 1: Getting to know the data requires a thorough review of all the pieces of data to become familiar with what you have at your disposal. For qualitative data, this means reading and re-reading responses, playing and re-playing audio and video recordings, taking notes about your thoughts and impressions, and deciphering which responses add value and which do not. For quantitative data, it means running counts or frequencies for each response, evaluating where missing responses occur, noting categories with small numbers, and considering data issues such as rounding or the use of percentages.

  • Step 2: Preparing and focusing the data is the process of sifting through all of the data and putting them into an organized format. This usually involves the use of computer software and is done to organize the data and concentrate the evaluator’s attention on the aspects that address the evaluation questions. An important part of preparing the data is eliminating inappropriate or meaningless information, such as when a respondent chooses more than one option for a survey item (e.g., “strongly agree” and “agree”) or selects the same option for every item (“strongly agree”). Equally important are data reduction, the process of putting items together to create a scale, and data transformation, the process of recoding the data to turn them into information that answers the evaluator’s questions.

  • Step 3: Analyzing the data includes a careful review of the responses to accurately interpret their meaning. Analysis is conducted differently for qualitative and quantitative data results.

    To analyze qualitative data, have several people read the transcripts, field notes, or documents to get an overall sense of the data. Note the common themes that arise related to each evaluation question, and note whether any important themes arise that are not related to the evaluation questions. Finally, reread the material, looking for details and patterns related to each common theme. Begin by asking questions such as:
    • What patterns and common themes emerge in responses to specific questions or items?
    • How do these patterns (or lack thereof) help to illuminate the broader evaluation question(s)?
    • Are there any deviations from these patterns? If yes, are there any factors that might explain these atypical responses?
    • What interesting stories emerge from the responses? How can these stories help to illuminate the broader evaluation question(s)?
    • Do any of these patterns or findings suggest that additional data may need to be collected?
    • Do any of the evaluation questions need to be revised?
    • Do the patterns that emerge corroborate the findings of any additional qualitative analyses (e.g., document review) that have been conducted? If not, what might explain these discrepancies?
    For quantitative data, begin by describing the responding persons, organizations, or communities using frequencies for each demographic or other item. Be sure to give the total number for each item; this is important because some people may not answer every question, so the totals can differ from question to question. Consider reporting the range for each descriptive item, for example, the youngest participant was “18 years old” and the oldest was “67 years old.” For questions answered on continuous scales (for example, 1=strongly agree; 5=strongly disagree), report the mean or average for each scale. (A minimal worked sketch of these descriptive steps appears after Step 5 below.)

  • Step 4: Interpreting the data involves revisiting the original evaluation questions. Once the data have been organized and carefully analyzed, it’s time to draw conclusions. To determine if the evaluation questions about your program have been answered, the evaluator will need to:

    • Put the information in perspective by comparing the results with:
      • What was expected
      • The original goals of the program
      • Common standards
    • Consider how the results can be formulated into recommendations to help staff improve the program, product or service
    • Draft some conclusions about program operations, or whether program goals were met, and use the evaluation data to support them
    • Record conclusions and recommendations in a report, and use the evaluation data to justify these conclusions or recommendations
  • Step 5: Selecting a data presentation format requires that findings be presented in a way that will clearly communicate the results. Presentation formats for qualitative and quantitative data will be different. Qualitative data are presented using sample quotes. Quantitative data can be presented using charts, graphs, and tables. Be sure each of the following has been accomplished before analysis is complete:

    • Describe the priority population and the sample that actually participated in the evaluation, and note how the two differ
    • List limitations of the data collected
    • Share the preliminary results with each of the stakeholders
    • Identify the most valid findings
    • Identify the most important findings for answering each evaluation question
    • Make sure each evaluation question was answered, and that the evaluation objective was met
    • Choose the best method of presentation for each finding
    • Discuss the “why” behind each of the findings with the stakeholders
    • Gather suggestions for “next steps” and other recommendations
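
As a minimal, purely illustrative sketch of Steps 1 through 3 for quantitative data, the Python code below (using the pandas library) runs frequencies, checks where responses are missing, combines three attitude items into a simple scale, and reports a total, a range, and a mean. The column names, items, and values are invented for illustration and are not taken from any actual survey.

    import pandas as pd

    # Hypothetical survey responses; all names and values are illustrative only.
    df = pd.DataFrame({
        "age": [18, 22, 35, 41, 67, None],
        "smoked_past_30_days": ["yes", "no", "no", "yes", "no", "no"],
        # Three attitude items on a 1-5 scale (1 = strongly agree ... 5 = strongly disagree)
        "att1": [1, 2, 2, 4, 5, 3],
        "att2": [2, 2, 1, 5, 4, 3],
        "att3": [1, 3, 2, 4, 5, 2],
    })

    # Step 1: get to know the data - run frequencies and note missing responses
    print(df["smoked_past_30_days"].value_counts())  # counts for each response
    print(df.isna().sum())                           # where missing responses occur

    # Step 2: data reduction - combine the attitude items into one scale score
    df["attitude_scale"] = df[["att1", "att2", "att3"]].mean(axis=1)

    # Step 3: describe respondents - total, range, and mean
    print("Respondents reporting age:", df["age"].count())
    print("Age range:", df["age"].min(), "to", df["age"].max())
    print("Mean attitude scale score:", round(df["attitude_scale"].mean(), 2))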

Reporting the Results

Consider the audience when preparing the report. Generally there are two audiences with whom to share results: the internal audience and the external audience. The internal audience includes staff, volunteers, management, community supporters, funders, members of the priority population and other stakeholders. The external audience includes the general public, tobacco control practitioners and researchers, and funders at the local, state and national levels. The timing, purpose, and format of the report will vary with the stakeholder group.

To help clarify the goal and objectives of the report, ask the following questions:

  • Who is the audience? What are its needs and interests?
  • What does the coalition hope to get back from the audience?

 

Why Share Evaluation Findings with Different Groups?

Audience Type | Reasons for Sharing Report Findings | Possible Venue

Internal Audiences
  • Staff | Provide feedback about the job they are doing | Staff meetings
  • Volunteers | Demonstrate the value of their efforts | Volunteer luncheon
  • Management | Guide decisions about program modifications | Management team
  • Community supporters | Demonstrate the value of the program | Civic fair
  • Funders | Gain continued funding | One-on-one meeting
  • Members of the priority population | Collect input on how to improve the less effective areas of the program | Community health fair

External Audiences
  • Community | Raise awareness about the issue; attract volunteers, funding, and in-kind resources from citizens and local agencies | Civic organizations, business groups, school boards, parent-teacher groups
  • State/National | Create a “name” for your initiative to make it competitive for seeking additional resources; help tap into state and national networks of persons and agencies with similar goals | Professional conferences, church conferences, grant makers

Consider the type of report and medium. Is it best to use a technical report, an executive summary, a popular article, a news release or press conference, an oral presentation, a public meeting, a staff workshop, brochures/posters, a memorandum, or a personal discussion? When and how frequently should the evaluator report? Internal audiences, especially those involved in program implementation and management, require earlier and more frequent reporting; the degree of formality will depend on the nature and purpose of the report.

More About Evaluation

The links below provide additional information for planning and implementing an evaluation. The Tobacco Technical Assistance Consortium, the Centers for Disease Control and Prevention, and the University of Kansas Community Tool Box offer further guidance on planning and producing evaluation reports.

Developed by Loukas A, Gottlieb NH, Robertson TR, & Sneden GG, Department of Kinesiology and Health Education, The University of Texas at Austin

Last updated April 11, 2011