Principal Investigator's Guide, Chapter 5: Planning for Success: Supporting the Development of an Evaluation Plan

"Would you tell me, please, which way I ought to go from here?" "That depends a good deal on where you want to get to," said the Cat. (1)

The Cat, who utters this famous line in Alice's Adventures in Wonderland, knew the key to planning a successful project: You have to know exactly what you want your project to accomplish before you can decide what you'll do to accomplish it. Then, you and your evaluator can develop a plan to determine whether your project has been successful. More specifically, you need to articulate clear project goals and measurable targeted outcomes. Goals tend to be lofty and visionary; outcomes are specific and describe the changes that you expect people to undergo as they experience your project...

Tina Phillips has extensive experience in developing, managing, and evaluating informal science education projects, with a particular interest in public participation in scientific research (PPSR). She is currently the Evaluation Program Manager at the Cornell Lab of Ornithology, where she is leading an NSF-funded project called DEVISE that is committed to building evaluation capacity within the PPSR field. As part of this effort, she is working collaboratively with evaluators and practitioners to provide guiding frameworks and contextually appropriate instruments for evaluating individual learning outcomes. She has written many articles on evaluation and was one of the authors of a landmark CAISE report: Public Participation in Scientific Research: Defining the Field and Assessing its Potential for Informal Science Education. Additional areas of concentration include formative and summative evaluations of machine learning experiences, website usability testing, and emerging research on understanding socio-ecological outcomes of PPSR. Tina holds a Master's in Education from Cornell University and is currently a PhD candidate at Cornell examining the relationship between citizen scientists' participation and outcomes related to knowledge, skills, and behavior.


With clearly articulated goals and outcomes in hand, your evaluator can develop a plan that serves as the roadmap for project evaluation and provides a window into the evaluation process. The plan should provide information about the purpose and context of the evaluation, who will be involved, and how evaluation data will be collected and reported. The plan also should include evaluation questions that align to the goals and outcomes and frame the entire evaluation. Finally, the plan should include a detailed timeline, budget, reporting strategies, and other logistical considerations such as the means for obtaining Institutional Review Board (IRB) approval for working with human subjects.


Rick Bonney is the director of program development and evaluation at the Cornell Lab of Ornithology, where he has worked since 1983. Some people think he was born there. He is co-founder of the Lab's citizen science program, and since 1991 has been PI, co-PI, consultant, advisor, or evaluator on more than 40 projects funded by the National Science Foundation. As a result he has extensive experience in developing partnerships between practitioners and evaluators to design and execute evaluation plans and disseminate their findings. Rick has been deeply involved in CAISE since its inception and led the CAISE inquiry group that produced the report Public Participation in Scientific Research: Defining the Field and Assessing its Potential for Informal Science Education. He is also on the board of directors of the Visitor Studies Association and is co-chair of VSA's communications committee. Rick received his BS and MPS degrees from Cornell University's natural resources department.


While the evaluation plan should be comprehensive, it also needs to be flexible so that it can reflect changes in project needs or circumstances as project development gets under way. For example, an evaluation plan may change because of logistical hurdles such as limited access to participants or budget constraints, because the project takes a new direction, or because unexpected outcomes emerge that are worth exploring. Consider the evaluation plan to be a working document that you and your evaluator share and that evolves as stakeholders offer their perspectives and insights on the developing study (Diamond 2009).

The remainder of this section will guide you along the evaluation highway.

Key elements of an evaluation plan

Background information

  • Project overview, intended audience, and stakeholders
  • General information for your evaluator

Project goals and outcomes

  • Logic model, theory of change, or other description of outcomes

Evaluation questions

  • Identification of what is to be evaluated
  • Evaluation questions (refined and prioritized)

Indicators of success

  • Measurable indicators of success
  • Links between goals, outcomes, and indicators

Methodology

  • Design strategy
  • Data collection strategy
  • Data analysis strategy
  • Interpretation strategy
  • Reporting strategy

Logistics

  • Timeline
  • Budgets
  • IRB approval

Project Overview, Intended Audience, and Stakeholders

Right from the start, your evaluator will seek to obtain as much information as possible about your project. He/she will want to know more than what is presented on a website or informational brochure. You'll need to provide information about your project's overall goals, intended audience, and project staff and partners. You'll want to describe the development and implementation plan for your project along with its targeted outcomes and deliverables. And you'll need to describe all ‘stakeholders’: the people and/or institutions that will be interested in the evaluation process and results. These include funders, collaborators, program participants, administrators, and policy makers.

Information to share with your evaluator

Providing your evaluator with previous evaluations or reports about related projects will be invaluable in helping him/her understand your audience. If no prior evaluation reports are available, provide your evaluator with whatever demographic data you have about your target audience. You also may wish to include the organizational, cultural, and historical context for your project. For instance, it's helpful to share information about how the project and team like to work, your organizational structures and expectations, and any contextual information that may influence the evaluation design.

Finally, if your project operates under an existing program theory, be sure to share that with your evaluator. Articulating program theory can be done both formally and informally. For example, if your project operates in afterschool settings, you can find plenty of literature that describes the research in this area, which may provide a guiding theoretical framework for how these types of projects are intended to succeed. More often, project staff simply provide information to the evaluator about what the program is actually supposed to do and how it is supposed to do it. Either way is fine for describing how the program works, and the more information that you can provide to your evaluator at the start, the more efficiently he/she can use that time to focus on developing a comprehensive evaluation plan.

Once the necessary information has been obtained, your evaluator should feel comfortable and well versed in your project and its intended audience. His/her understanding of the project should be evident in what is written as the background or overview of the evaluation plan. If you sense misunderstanding about your project, sort it out at the beginning!

Project goals and outcomes

Most likely you developed goals and outcomes for your project while preparing a project proposal or development plan. In an ideal world, your intended goals are achievable and your targeted outcomes are specific, measurable, and relevant to your project participants. As you begin working with your evaluator to develop an evaluation plan, however, you may discover that your goals were a bit too ambitious or that your targeted outcomes were vague. In short, your goals and outcomes may fall under the technical term of ‘fuzzy’ (Patton 2008). If so, your evaluator's knowledge and experience can help you refine your goals and clearly articulate your outcomes. For example, your evaluator will examine whether each of your outcomes identifies the intended audience(s) and how the audience is expected to change (e.g., increasing knowledge, developing a more positive attitude).

During this process you can expect your evaluator to ask you exactly what you are attempting to achieve with your project. Probing questions are not meant to make you feel uncomfortable; they are intended to help your evaluator clarify your goals and objectives because learning whether they are being met is what the evaluation process is all about. And unlike other aspects of your project, which can change and adapt, changing your goals and objectives halfway through a project can mean starting all over with a new evaluation plan. Your evaluator also will check that all stakeholders agree on project goals and objectives, and if different stakeholders have different goals, the evaluator will set priorities or look for agreement that multiple goals will be evaluated.

Seeing how goals and objectives fit into project development can be challenging. Experienced evaluators are skilled at visualizing complexity, and their expertise will help you see the big picture and graphically present what you hope to provide and how your program will achieve its intended outcomes. Evaluators have many tools for visualizing complexity; below we discuss two that are widely used.

Logic models

A logic model is a visual depiction, often presented in matrix or mind-map form, of how a project works. You can think of a logic model as a graphical representation of your program theory. Logic models link outcomes (both short- and long-term) with project activities/processes and the theoretical assumptions and principles of the project (W.K. Kellogg Foundation 2004). Logic models also help evaluators focus their study on the most critical project elements (National Science Foundation 2010).

Logic models should be developed collaboratively between the project team and the evaluator. This process will contribute to a unified project vision including shared understanding of project goals, agreement on targeted project outcomes, and expectations about when those outcomes will occur.

While the PI leads decision-making for the logic model's content, evaluators often bring skills in facilitating and supporting the development process. They can help to distinguish and refine elements of the model and make sure that the full range of potential outcomes is considered. The evaluator can also identify any outcomes depicted by the logic model that cannot be easily or reliably measured (this may lead to a discussion of whether those outcomes should still be included in the model).

While logic models should be developed early in project development, they are not static tools. As projects evolve and their underlying assumptions and theory change, logic models must be updated to reflect the new thinking (National Science Foundation 2010). Some projects create a poster of their logic model and then, as the work progresses, use sticky notes to update and ‘check off’ tasks within the model. Or they create an online tool that all team members can access, discuss, and modify. Logic models come in many types and formats, and no single strategy is ‘best’ for creating them. Often, however, a logic model is portrayed in graphic form with the following key elements: inputs, activities, outputs, outcomes, and impacts.

Inputs

Inputs are resources that are made available to the project. They include funding sources, staff time, volunteer/user interest, and project or technological infrastructure.

Activities

Activities refer to things that the project will develop, conduct, or make available for use by the intended audience. They can be broken down into activities conducted by project staff and those done by the intended audience. Staff activities could include providing training workshops, creating educational materials, recruiting partner organizations, or developing exhibits. Participant activities might include attending trainings or events, visiting a web site, collecting and submitting data, and communicating with others.

Outputs

Outputs are the direct products or services of the activities and typically are easy to quantify, for example, the number of training workshops that staff deliver, the number of people that participate in a project, or the number of web pages that a project produces.

Outcomes

Outcomes are the changes in individuals, groups, or communities that occur as a result of project participation or experience. Outcomes are often described as short-term, occurring within a few years of the activity; medium-term, happening within 4-7 years after the activity; or long-term, happening many years after an activity has commenced (W.K. Kellogg Foundation 1998).

Impacts

Impacts are essentially long-term outcomes. They tend to be broad in scope and provide expanding knowledge or capacity for a particular segment of society. While desired impacts are often presented in logic models, they are rarely measured because of their inherent complexity and because their timeframe usually extends beyond the period of project funding.
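Because logic models are meant to be living documents that the whole team can discuss and modify, some teams find it handy to keep them in a simple structured form alongside the graphic version. The sketch below, written in Python, is a minimal illustration of that idea; every name and entry is a hypothetical placeholder meant only to show how inputs, activities, outputs, outcomes, and impacts fit together, not content from an actual project.

```python
# A hypothetical logic model captured as a plain Python dictionary so that
# the whole team can read, discuss, and update it (e.g., under version control).
# All entries are illustrative placeholders, not drawn from a real project.
logic_model = {
    "inputs": ["grant funding", "project staff time", "volunteer interest",
               "online data-entry infrastructure"],
    "activities": {
        "staff": ["develop training materials", "run training workshops"],
        "participants": ["attend a workshop", "collect and submit observations"],
    },
    "outputs": ["number of workshops delivered", "number of active participants",
                "number of observations submitted"],
    "outcomes": {
        "short_term": ["participants gain data-collection skills"],
        "medium_term": ["participants interpret and discuss project data"],
        "long_term": ["participants apply what they learned in their communities"],
    },
    "impacts": ["broader public capacity to engage with science"],
}

# One benefit of a structured version: simple consistency checks, such as
# confirming that every outcome category actually has at least one entry.
for term, outcomes in logic_model["outcomes"].items():
    assert outcomes, f"No outcomes listed for the {term} category"
```

A structured record like this does not replace the graphic model; it simply gives the team one shared, easily edited source to update as the project evolves.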

Sample logic model

This sample logic model was adapted from a citizen science project:

Theory of Change

Some evaluators may ask you to articulate your ‘Theory of Change,’ i.e., how you think each of your project activities will lead to your desired outcomes. A theory of change does not have to be based on documented theories but can be based on your prior experiences, assumptions, expert knowledge, or even wishful thinking. Once you make your theory of change explicit, you need to communicate it to other members of your team and, in turn, have them share how they think the project activities will lead to desired outcomes.

Once your team's assumptions are made explicit, you can begin to test them by creating statements that link your activities with short-, medium-, and long-term outcomes. A theory of change will describe the strategy or set of actions to be implemented by the project as well as the desired outcome from those activities. The easiest way to do this is by using ‘if...then’ statements. For example, let's say that you are implementing an afterschool program aimed at increasing interest in science careers. For this outcome, begin by listing your assumptions: We assume that exposing kids to science will increase their interest in science careers. Then describe the activities as they relate to the outcomes with ‘if...then’ statements. You may find that you need to provide additional activities or supports to reach the outcome.

EX 1: If we provide fun, compelling science-related activities, then we will increase interest in science careers.

Are there holes in example 1? Are there assumptions that need to be addressed? Could it be improved? Let's try another one...

EX 2: If we provide science-based activities, and describe how they relate to science careers, then students in the afterschool program will have knowledge of some different science careers. If students know about different science careers, then they may seek out additional information about a particular career. If they seek out more information on a career, then they may show increased interest in pursuing a science career.

The set of statements in example 2 makes it much clearer how the activities are linked to the desired outcomes. As project developers, we are often too embedded in our programs to see and identify assumptions about audience needs and interests or to envision the explicit mechanisms that must be in place through project activities to influence change. Working with your evaluator to develop logical "if...then" statements can help uncover and address these assumptions so that activities and outcomes are aligned.

Sample Theory of Change

A theory of change can also be depicted graphically as a "results chain," as demonstrated here:

Whether you and your evaluator develop a logic model, theory of change, or some other representation of your project, remember that you are the driver and primary decision maker for setting the project direction. Your evaluator complements and supports your project. In other words, it is the PI's job to decide what a project should do and the evaluator's job to determine whether the project did it. That said, evaluators often support PIs and project teams by facilitating the process of focusing project goals, intended audiences, and outcomes, and by providing expertise in clarifying ideas, building cohesive project designs, and shaping conceptual frameworks that drive the evaluation forward.

Common pitfalls when developing goals, outcomes, and indicators

It is not enough to develop a program and then assume that participants will achieve the outcomes that you intend for them to achieve. Below are pitfalls that we often see in program development:

  • ‘Wishy-washy’ outcomes: outcomes that are not specific, not measurable, or not relevant to the project.
  • Targeted outcomes not aligned to project activities: for instance, you say that you want your project participants to increase their data interpretation skills, but your project does not actively support data interpretation as an activity.
  • Expecting too much: You want your project to have far-reaching and lasting impacts, but the truth is that your resources are limited. You need to be realistic about what your project can actually influence.

Evaluation questions

Your evaluation questions form the backbone of your design strategy and everything that follows. It is helpful to begin by clarifying what you intend to evaluate and understanding what will not be evaluated. Next, you will generate questions that can be answered during front-end, formative, and/or summative evaluation. We have included sample questions in each of those broad categories as thought-starters. And finally, as you refine your set of questions, you will want to shape them and prioritize them according to a variety of criteria described below.

Identifying what is to be evaluated

With goals, outcomes, and a logic model for your project in place, the next step is to explicitly articulate the main reason or reasons for your project evaluation, the specific aspects of the project that will be evaluated, and the specific audience for the activities or products that will be evaluated. The phases of evaluation discussed earlier in this guide can be used to frame the evaluation plan:

Front end:

  • Determine audience needs and interests
  • Acquire contextual information about the political, social, and cultural environment of a particular program

Formative:

  • Monitor a project on an ongoing basis through regular data collection
  • Describe how a project functions
  • Provide recommendations to improve project functionality
  • Clarify program purpose or theory

Summative:

  • Gauge whether targeted outcomes have been achieved
  • Summarize learning from the evaluation and any unintended effects that were documented
  • Identify project strengths and weaknesses
  • Determine overall value or worth of a project
  • Determine cause and effect relationships between an intervention and outcomes

Additional goals of evaluation can include:

  • Obtain additional funding or support
  • Build organizational evaluation capacity
  • Compare outcomes across projects
  • Conduct a cost-benefit analysis comparing project costs with outcomes

Just as important as identifying what will be evaluated is deciding what will not be evaluated. Defining boundaries for the evaluation as the project begins, whether those boundaries involve specific audiences, time frames, locations, or individual project elements, will minimize surprises later in the process. Too often PIs arrive at the end of project development and wonder why something was not evaluated simply because the boundaries of the evaluation were not explicitly discussed. Avoiding this problem is easy if you take responsibility for communicating boundaries to your evaluator as the evaluation plan is developed. You'll also want to check with your evaluator to see whether he or she foresees any constraints that might affect the overall evaluation.

Developing evaluation questions

The next step in developing the evaluation plan is to frame appropriate evaluation questions within the context of desired outcomes and the purpose of your evaluation. Evaluation questions should be broad enough to frame the overall evaluation yet specific enough to focus it. Articulating well-formed questions (those that frame the overall study, not questions that might be asked of participants) will help your evaluator determine the overall study design, approach, and selection of methods (Diamond 2009). You and your evaluator can work together to develop questions that will address what you need to know to determine whether you are reaching your desired outcomes. Answers to the evaluation questions must be relevant, meaningful, evidence-based, and useful to the project stakeholders.

Sample evaluation questions

For example, a front-end evaluation interested in better understanding a project's audience might ask the following types of questions:

  • What does our audience already know about this particular topic?
  • What misconceptions exist among our audience regarding this topic?
  • How interested is the intended audience in this new emerging topic?

Formative evaluation questions, which focus on understanding the extent to which a project is functioning as expected, may ask:

  • What, if any, were the barriers to participation?
  • Were project participants satisfied with their experience? Why or why not?
  • What lessons were learned about developing and implementing the project?
  • Were participants engaging in activities as planned? Why or why not?

Summative evaluations, where the emphasis is on determining if projects have met their goals, may ask the following questions:

  • Was there evidence of an increase or change in knowledge as a result of interacting with this exhibit? For which participants, and at what level?
  • Did participants improve their skills in data interpretation after participating in the project?
  • Was there evidence that participants changed aspects of their consumer behavior as a result of viewing this television program?
  • What was the value, if any, of participation in this project for the intended audience?

Qualities of effective evaluation questions

You will likely come up with a large number of questions for which you would like answers, but remember that not all questions can be answered given the allotted time and resources, and not all questions will have the same importance to all stakeholders. Also, multiple data sources can be used to answer individual evaluation questions; similarly, single data sources can contribute to answering multiple evaluation questions.

Your evaluator will work with you to ensure that your evaluation questions are 1) answerable; 2) appropriate for the various stages of evaluation; 3) aligned to the desired outcomes; and 4) responsive to stakeholders' information needs. In addition to these criteria, your evaluator also will help you prioritize the questions that are most critical to address by considering the following aspects:

  • The resources needed to answer the question
  • The time required
  • The value of the information in informing the evaluation purpose

As each question is examined through the lens of these criteria, some will present themselves as high priority while others will be eliminated altogether. At the end of this process you should feel comfortable knowing that the questions you focus on are measurable, relevant, and feasible to answer, and that they set the stage for the rest of the evaluation roadmap.

Developing Indicators of Success

Now you have project goals, outcomes, a logic model, clearly expressed reasons for conducting your evaluation, and clearly articulated evaluation questions. The next task that you and your evaluator will tackle is developing indicators, which are criteria for measuring the extent to which your targeted outcomes are being achieved. Effective indicators align directly to outcomes and are clear, measurable, unbiased, and sensitive to change. For instance, if an outcome relates to knowledge gains, the indicator should measure knowledge gains as opposed to, say, participant interest. An indicator answers the question: How will you know it when you see it? And while indicators are measurable, they do not always need to be quantifiable. Indicators can also be qualitative and descriptive.

Identifying realistic, feasible, and valid indicators is probably the most difficult step in designing an evaluation. The constraints of time, funding, and reach can restrict the types of data that you can collect. At the same time, the easiest things to document may not always be the most salient or compelling issues. Sometimes data are not feasible to collect or simply unavailable for certain indicators; in this case, the lack of data sources should be acknowledged in the evaluation plan as a limitation of the study.

Links between goals, outcomes, questions, and indicators

Template for articulating goals, outcomes, and indicators

In the template for articulating goals, outcomes, and indicators provided below, you will note that for each goal we provide space for developing several outcomes and indicators. There is no ‘correct’ number of outcomes or indicators, and each project will vary in the number that it attempts to achieve and measure. Working through this worksheet will be an extremely valuable exercise in developing a project and its associated evaluation plan. And if you include an outcomes development sheet as part of a grant proposal, you'll help readers better understand the chain of effects that you're hoping will result from your project.

Evaluation Methodology: Matching the Study Design to Your Questions

Design Strategy

As we continue our journey down the evaluation highway we arrive at a critical juncture: What strategy will we use to design the overall evaluation? The answer should reflect the types of questions you need answered, the reason for conducting the evaluation, the methods that best address the evaluation questions, the amount of resources you can commit to the evaluation, and the information that project stakeholders hope to learn.

Many different evaluation approaches and study designs exist, and it is beyond the scope of this guide to describe them all. Different study designs are better suited for different types of evaluation questions. If your question is concerned with comparing outcomes for participants directly before and after project participation, then pre-post designs will likely fit the bill. Questions about causal processes, where you can include control groups and random assignment, are best suited to experimental designs. Many evaluators will combine these approaches into mixed-methods designs, incorporating both quantitative and qualitative techniques to strengthen the various data collection methods and to increase the validity of results through triangulation of findings (Creswell 2003). For example, if one of your questions is best answered by broad representation of a population and data are easy to acquire through questionnaires, then quantitative survey methods work very well. If one of your questions requires gathering robust information on participant experiences and you can gain easy access to participants, then qualitative interview or focus group methods are appropriate.

Data Collection Strategy

A common pitfall in designing evaluation studies is the instinct to start by identifying preferred methods; for example, "What I want is a series of focus groups conducted with youth in the science afterschool program" (Diamond 2009). Discussion of data collection methods should come only after your goals, targeted outcomes, evaluation questions, indicators and study design have been clarified and agreed upon. Then, for each indicator, you and your evaluator will need to determine:

  1. Who is the intended audience and what specific information do you hope to get from its members? (This discussion should be led by the PI.)
  2. What method of data collection is best suited for obtaining the information that you need from this audience? (This discussion should be led by the evaluator.)
  3. When will the information be collected and by whom? (This discussion should be led by the evaluator with input from the PI.)

The possibilities for data-collection strategies are nearly endless. In choosing methods your evaluator will consider issues such as the potential trade-offs in collecting rich, in-depth qualitative information versus information that has a high level of statistical precision, the need to collect standardized data, the cultural attributes of the audience, and the availability of contact information for the sample. These issues will also help your evaluator determine the population to sample and the appropriate sample size.

Sample data collection strategy

Data analysis strategy

Data analysis involves the process of examining, cleaning, and transforming data so that conclusions can be reached about whether targeted outcomes were realized. Data analysis can take many different forms and relies on different methodologies depending on the project need, the audience, how the information will be used, and your evaluator's expertise. If the evaluation is going to rely heavily on qualitative data, i.e., data derived from text or images, then data reduction will be required to transform lengthy documents into succinct, useful information (usually in the form of common themes or categories). If the evaluation is going to be primarily quantitative, i.e., collecting various numbers or scores, your evaluator will need to use statistical methods to transform the data into charts, graphs, and tables that assign meaning to all the numbers and provide comprehensible information. Your evaluator may be skilled in analyzing both qualitative and quantitative data, thereby leveraging the strengths of both methodological approaches.

Before getting to the data analysis phase it is critical that you understand and are comfortable with the approach that your evaluator will use for collecting data, as this will most certainly shape the way in which he or she analyzes the data. Regardless of the approach used, you should feel comfortable asking about the overall quality of the data set and the measurements used, and whether the appropriate data were collected to answer the evaluation questions.
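To give a concrete flavor of the quantitative side, here is a minimal sketch of how a simple pre/post comparison might be run. The file name, column names, and the choice of a paired t-test are illustrative assumptions for a design in which the same participants complete the same short knowledge scale before and after the program; they are not a prescription from this guide.

```python
# Minimal sketch of a quantitative pre/post analysis (illustrative only).
# Assumes a hypothetical CSV with one row per participant and columns
# "pre_score" and "post_score" from the same knowledge scale.
import pandas as pd
from scipy import stats

df = pd.read_csv("participant_scores.csv")              # hypothetical file name
paired = df.dropna(subset=["pre_score", "post_score"])  # keep complete pre/post pairs

gain = paired["post_score"] - paired["pre_score"]
print(f"n = {len(paired)}")
print(f"Mean pre score:  {paired['pre_score'].mean():.2f}")
print(f"Mean post score: {paired['post_score'].mean():.2f}")
print(f"Mean gain:       {gain.mean():.2f}")

# A paired t-test asks whether the average gain differs from zero; an evaluator
# would also check its assumptions (e.g., roughly normal gains) before drawing
# conclusions, and might choose a different test or model entirely.
t_stat, p_value = stats.ttest_rel(paired["post_score"], paired["pre_score"])
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

Qualitative data would follow a very different path (transcription, coding, and theme development), which is one more reason to agree on the analysis strategy for each kind of data before collection begins.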

Data interpretation strategy

Evaluation is both an art and a science, and nowhere is that more evident than in the data interpretation phase. Just as no two people will interpret a painting in the same exact way, no two evaluators will interpret data (either quantitative or qualitative) in exactly the same way.

Your evaluator should have expertise in interpreting the kind of data that you plan to obtain through your evaluation and should be able to explain how the interpretation will describe which outcomes were and were not realized and why. Data interpretation also should help to clarify whether limitations of the study design, data collection process, or other circumstances contributed to the results. In some cases, interpretation may reveal unintended outcomes and suggest how these could be incorporated into future project improvements. Some evaluators may also plan to compare results from your project with those from similar programs. Evaluators may also plan to reflect on project outcomes, the broader context of the project, and future research possibilities. If these are issues that you would like to have included in the data interpretation, be sure to spell them out!

Data reporting strategy

Once data have been analyzed and synthesized, your evaluator will need to write an evaluation report. The report may be the most tangible product of your evaluation process and will be shared with all stakeholders interested in your project impacts. This phase of project evaluation is so important, and holds so many possibilities, that we have included an entire chapter of this guide on the subject (see Chapter 6).

In developing the evaluation plan your evaluator should describe not only what will be in the report but also how and when the information will be shared. For example, some evaluators provide continuous feedback about data being collected through interim reports or via regular meetings. Other evaluators prefer to wait until data collection is complete before analyzing or sharing information with you. Make sure that you are comfortable with the reporting strategy described in the plan.

Logistics

Your evaluator can assist you in laying out a budget and timeline for your evaluation design and ensuring that it meets requirements for Institutional Review Board (IRB) approval. It is helpful to maintain an open dialogue with your evaluator about the costs and time frames associated with different aspects of your evaluation study in order to shape a design strategy that is aligned with your budget and schedule.

Timeline

The evaluation plan should include a timeline that provides anticipated start and end dates for completing key tasks and meeting established milestones.

Timelines are often presented in calendar format:

Be sure that the timeline seems reasonable given what you know about your project and its audience. For example, if the evaluator is conducting formative usability testing of a web-based application that your staff will develop, does the timeline align with your team's development schedule? If the evaluator plans to collect data for summative evaluation through a survey of participants, does the timeline allow sufficient time to recruit willing respondents? While timelines often change, starting with one that seems realistic will help to avoid later frustrations.

Budgets

The evaluation plan also needs to provide a budget. Complete evaluations typically make up about 10 percent of an overall project budget, but this figure can vary greatly depending on the complexity of the evaluation. For example, evaluations that incorporate experimental designs with control groups are generally more costly than those that rely on pre-post project surveys with no control groups. On the qualitative side, interviewing 50 people for 60-90 minutes each, then transcribing and analyzing the resulting information, is also very time intensive and thus expensive. Recruiting participants can be costly depending on how you intend to engage them, particularly if incentives are required to ensure their participation. As you discuss the plan, your evaluator will give you a sense of what is feasible at different cost levels, and together you can develop a budget that is appropriate for the project.

Institutional Review Board approval

Most institutional review boards (IRBs) require you to submit a detailed description of your project, your audience, the methods you will use, any surveys, observation guides, interview guides, or other instruments you intend to use, and how you intend to recruit people into your study. They will also want to see a copy of a consent form as well as a description of how you will minimize risk to your participants and ensure their confidentiality. Typically, independent evaluators do not have direct access to an IRB and must rely on a college or university IRB to acquire approval. Be sure to check with your organization to determine what is required so that together you and your evaluator can complete the necessary training and submit the required documents well ahead of implementing your evaluation.

Conclusion

When the evaluation plan is complete, it will be up to you to make sure that it will meet your project needs. You may need to go back and forth with your evaluator a few times; indeed, constructing an evaluation plan that is relevant, feasible, and effective requires regular and iterative communication between the project team and the evaluator. Remember that you are driving this process, and it is up to you to make sure that the evaluation serves your project's long-term interests and helps answer questions that will guide your future planning and management goals. And, as you're learning about evaluation, your evaluator is learning about your project and your organization. There's a lot for you both to learn through this process, so clear communication, patience, flexibility, and a good sense of humor are all necessary elements in developing a strong and collaborative evaluation plan.