Principal Investigator's Guide, Chapter 2: Definitions and Principles: Guiding Ideas for Principal Investigators to Know

Lauren Russell specializes in leading coalitions of stakeholders toward envisioning and implementing informal science education projects. Currently director of grants and strategic partnerships at the Oregon Museum of Science and Industry (OMSI), she has nearly a decade of experience working in science centers, where she has worn many hats: project manager, educator, external evaluator, professional development facilitator, and fundraiser. Lauren's belief in the value of partnerships between museums and their local scientific communities has focused much of her work on engaging the public with current research. Prior to joining OMSI, she led the award-winning Portal to the Public initiative at Seattle's Pacific Science Center. Funded by the National Science Foundation, this effort brings together scientists and science center visitors in personal, activity-based learning experiences, a framework now being implemented at science centers across the country. Lauren values cultivating effective working relationships between evaluators and practitioners; with an eye to both project success and advancing the field, these relationships support teams in completing evaluations that are meaningful and useful to all stakeholders.


This chapter presents ideas to help you incorporate evaluation into your work. We include definitions, explanations, and principles that guide the use of evaluation in the process of designing, testing, and refining a project and then understanding its outcomes for visitors and participants. We'll define evaluation within the context of informal Science, Technology, Engineering, and Mathematics (STEM) education, describe the purposes of evaluation, and discuss the challenges and opportunities that evaluation offers to practitioners of informal STEM education.

Evaluation plays an important role throughout your project. In the planning stages, an evaluator can help you clearly define your targeted outcomes and their connections to your project activities. As you implement your project, evaluation can reveal needed course corrections. At the conclusion of your project, evaluators can help you understand its outcomes.

Evaluators use a variety of data collection methods such as observations, interviews, focus groups, and surveys. These methods yield qualitative and quantitative data that can be used to make recommendations for project improvement, to assess project effectiveness, and/or to answer a wide range of project-specific questions.

The concrete end product of an evaluation is usually a formal report (or reports) that includes project background, study design and methods, data gathered from visitors and participants, and key findings and recommendations. You can view numerous examples of evaluation reports for informal science education (ISE) projects on InformalScience.org.

Defining Evaluation

Evaluation has many definitions, and you need to make it work for your project! Here are some formal definitions, along with more background on what questions evaluation can answer and a description of the differences between evaluation and research.

Evaluation in the Informal Science Education (ISE) field & other fields

The term "evaluation" encompasses a broad range of activities and purposes, so it's no wonder that ISE practitioners are often challenged to pin down a precise definition. Evaluation takes many forms (front-end, formative, and summative evaluation may be familiar terms), and each has a wide range of purposes and benefits. Surrounded by Science, a National Research Council report focused on learning science in informal environments, defines evaluation as a set of approaches and techniques used to make judgments about the effectiveness or quality of a program, approach, or treatment; to improve its effectiveness; and to inform decisions about its design, development, and implementation (National Research Council 2010). In other words, evaluation for an ISE project generally provides information that can guide the project, suggest how it might be improved, and, in the end, provide evidence to demonstrate whether it worked as intended.

Evaluation became a prevalent practice and a growing academic field in the 20th century, when it was used on a widespread basis to assess social programs in education and public health (Rossi 2004). Evaluations are now used in an array of contexts to assess diverse programs in education, social services, organizational development, and public policy initiatives. Many journals and professional organizations are devoted to the broad field of evaluation, including the American Evaluation Association (www.eval.org), which publishes the American Journal of Evaluation and New Directions for Evaluation.

Looking beyond the field of informal STEM education, evaluation can be defined as the use of social research methods to systematically investigate the effectiveness, value, merit, worth, significance, or quality of a program, product, person, policy, proposal, or plan (adapted from Fournier 2005, Rossi 2004). The Encyclopedia of Evaluation further explains that conclusions made in evaluations encompass both an empirical aspect (that something is the case) and a normative aspect (judgments about the value of something) (Fournier 2005). The normative aspect explains why recommendations are often included in evaluation reports.

Evaluation answers three questions: what? so what? now what?

A common pitfall when designing evaluations is the instinct to start by identifying preferred evaluation methods, for example, "What I want is a series of focus groups conducted with youth in the science afterschool program" (Diamond 2009). Evaluation planning should begin not by choosing methods but by defining questions that frame what you want to know from the overall study (not questions that might be asked of participants) (Diamond 2009). Your evaluation questions can then guide the choice of data collection methods. Michael Quinn Patton, in Utilization-Focused Evaluation (2008, p. 5), states that in the simplest terms, evaluation answers three questions: What? So what? Now what?

What: What happens in the program? What services and experiences does the program offer? What activities and processes occur? What outcomes and impacts result? What unanticipated outcomes emerge? What are the program's documented costs and benefits?

So what: What do the findings mean? Why did the results turn out as they did? What are the implications of the findings? What judgments can be made? To what degree and in what ways can the program be considered a success? A failure? A mixed bag of positives and negatives? How does this program compare to other programs? What sense can we make of the findings?

Now what: What recommendations can be made from the findings? What improvements should be made? Should funding be continued, expanded, reduced, or ended? Should others adopt the program? What do findings from this project suggest for other or future projects? In short, what actions flow from the findings and their interpretations?

The Difference Between Evaluation and Research

Let's discuss an important question: what is the difference between evaluation and research? Many practitioners find the distinction confusing because research and evaluation share many of the same methods for collecting and analyzing data, and many professionals lead both research and evaluation studies.

However, the purposes and the units of primary interest for research and evaluation are usually different. Much of educational research is designed to study a characteristic of learning grounded in an academic discipline such as psychology or sociology, or to study a particular theoretical framework. Research traditionally is geared toward knowledge generation and usually includes dissemination of findings through publication in peer-reviewed journals.

In contrast, the primary purpose of evaluation is to assess or improve the merit, worth, value, or effectiveness of an individual program or project and to advance the field (in this case, informal STEM education) by deriving lessons for funders, policymakers, or practitioners. Evaluation studies are generally conducted for clients and in collaboration with various stakeholders who are invested in improving or assessing a particular intervention, event, program, or activity.

The complementary roles of evaluators and discipline-based researchers

Learning researchers and evaluators are using complementary methods to study the Life on Earth exhibit, a multi-user touch-table featuring an interactive visualization.

This interactive visualization of data from the Tree of Life (a web-based hierarchy of phylogenies representing over 90,000 nodes in the evolutionary tree) was developed by Harvard University in partnership with Northwestern University, University of Michigan, and the University of Nebraska State Museum in order to study strategies for engaging museum visitors in exploring the relatedness of all known species.

Previous research has shown that museum visitors initially exhibit reasoning patterns that combine intuitive reasoning about how life changes with some evolutionary knowledge and religious reasoning. Results from research studies with Explore Evolution indicate that a single visit to the exhibition can help visitors significantly shift their reasoning patterns to include more evolutionary reasoning. Moreover, visitors appear to do so in a predictable learning trajectory. Preliminary results from the Life on Earth exhibit component suggest similar findings.

The Life on Earth research team is investigating whether the experience of interacting with the multi-touch exhibit moves visitors along a gradient toward using evolutionary explanations more often. The team's discipline-based researchers focus on specific types of learners (pairs of youth aged 9-14), and they use comparison studies of groups randomly assigned to different conditions: for example, one condition involves using the multi-touch exhibit while another involves viewing a video about the Tree of Life. In contrast, the evaluation team uses a more naturalistic approach to assess the impact of the exhibit on visitors' behavior and attitudes. The evaluators examine how visitors use the exhibit as designed and implemented to see what people do and say when the exhibit is installed in a museum environment. The evaluation findings thus help the team understand how a range of people use and interact with the Life on Earth exhibit, providing context for the researchers' findings.

Stakeholders

Even within informal STEM education, evaluation has many stakeholders.

Many stakeholders benefit from evaluation, including project developers, project participants and their communities, and project funders. The primary stakeholder is often the project team. Evaluation can help a team build a reflective practice throughout project development, understand what audience impacts are occurring, strategically improve a project, and plan for future work.

Project participants and their communities are stakeholders because they are typically the project's direct beneficiaries. Evaluation findings often describe participant experiences and may inform future services and programs that will be available to them. The American Evaluation Association's "Guiding Principles for Evaluators" explains that evaluators must "articulate and take into account the diversity of general and public interests and values that may be related to the evaluation" (AEA 2012).

Funders such as the National Science Foundation are also key stakeholders in evaluation of the projects they help to bring about. Funders recognize the value of integrating evaluation into project development for the benefit of all stakeholders. Evaluations also help funders understand and describe the impact of their project portfolios and inform strategic decisions about investments (Friedman 2008).

In some cases stakeholders may hold conflicting opinions regarding the purpose of an evaluation. For example, on-the-ground practitioners may be most interested in learning how to make a program better (improvement-oriented formative evaluation), while funders may prioritize summative or accountability-focused evaluations. Therefore, stakeholders and evaluators must have open conversations to agree on the goals and intended purposes for a project's evaluation. Then the evaluator will be able to determine the best approaches and methods to carry out one or more studies. Sometimes data collected from participants can be used for multiple evaluation purposes.

Three Main Types of Evaluation

Front-End Evaluation

Information-seeking front-end evaluation focuses on gathering information that informs project planning and development (Diamond 2009). Front-end evaluation often takes the form of audience research as it gathers data about the knowledge, interests, and experiences of the intended audience.

Formative Evaluation

Improvement-oriented formative evaluation focuses on learning how to improve or enhance a project (Patton 2012). Formative evaluation gathers data about a project's strengths and weaknesses with the expectation that both will be found and that the information can be used to make improvements.

Summative Evaluation

Judgment-oriented summative evaluation focuses on determining a program's overall effectiveness and value (Patton 2012). Summative evaluation is particularly important in making decisions about continuing, replicating, or terminating a project, or providing lessons learned about informal STEM education for the broader field. Summative evaluations are often requested or required by funders, including the National Science Foundation.

Qualities of Informal STEM environments

Principal Investigators, evaluators, and project teams need to understand and consider the special attributes of informal STEM education when planning and implementing evaluation studies. Understanding this context helps a team design evaluations that leverage the strengths of ISE settings and set reasonable and realistic goals and expectations. Key attributes of informal learning environments and experiences that present opportunities and challenges for evaluation include complexity, social experience, variety, and the fact that informal STEM education is an emerging field.

Complexity: Informal learning environments and experiences are complex.

Many informal STEM experiences are short, isolated, free-choice, and self-directed. They often target heterogeneous public audiences whose members come to the project with unique prior knowledge, interests, and experiences; individual audience members learn different things, not just different amounts (Friedman 2008).

Challenges:

  • Separating the effects of a single experience from a variety of other factors that could contribute to positive learning outcomes can be challenging (National Research Council 2009, 2010). This is true with many education interventions but particularly so with informal learning environments.
  • Establishing uniform evaluation activities, approaches, and methods that do not sacrifice a participant's freedom of choice and spontaneity can be difficult (National Research Council 2009, 2010).
  • Experimental designs, in which participants are assigned to treatment and control groups, may not be practical or the most appropriate approach for evaluating many ISE projects. Therefore, conclusively attributing specific outcomes to a set of specific experiences or interventions is a difficult, and often inappropriate, task for evaluation in the ISE context (Friedman 2008).

Opportunities:

  • ISE environments allow evaluators and practitioners to consider a wide range of potential outcomes, including some that may be unanticipated during project design.
  • Given that ISE experiences are learner driven, evaluations in ISE environments can be designed to be learner centered.
  • Because of the complexity of ISE settings, evaluators must respond creatively and flexibly with new instruments, methods, and approaches, which can advance the field of evaluation as a whole.

Social Experience: Many informal learning experiences are collaborative and social.

"Doing well" in informal settings often means acting in concert with others (National Research Council 2010). Participants may be motivated to engage in ISE with the primary goal of having a social experience, considering learning goals secondarily or not at all.

Challenge:

  • Teasing apart individual assessment from group process and accomplishments, especially in light of unanticipated outcomes, can be difficult (National Research Council 2009, 2010).

Opportunity:

  • Evaluation in the ISE context helps us better understand socially mediated experiences across family and multi-age groups. These insights add richness and depth to our understanding of how people learn through interaction and conversation, which subsequently helps us to design experiences that better support social interaction.

Variety: Informal STEM education environments and experiences are exceptionally diverse.

Intended audiences, settings, delivery methods, depth, expected outcomes, and other dimensions vary, and experiences may include exhibits in museum environments, television and radio programs, casual investigations at home, or afterschool programs.

Challenges:

  • Participants may or may not be able to articulate personal changes in skill, attitude, behavior, or other outcomes at any stage of an informal learning experience. Therefore, evaluators may need to design instruments or other evaluation techniques that do not require or solely depend on self-articulation (Allen in Friedman, 2008).
  • Connecting the dots between various evaluations to make generalizations about learning or best practices is complicated because of multiple unique contextual factors.

Opportunities:

  • Many ISE environments allow for nimble and flexible evaluation. At museums especially, visitors are abundant, and most are willing study participants.
  • Because of the diverse contexts that surround informal learning, ISE evaluation is well positioned to draw on and contribute to theory, knowledge, and methods from a broad array of academic disciplines, including psychology, learning sciences, cognitive sciences, formal education, sociology, public health, and anthropology.

Emerging Field: Informal STEM education is a young, emerging field.

This presents a challenge because everything is new! But the newness also creates even more opportunities for creative research and evaluation.

  • Recent efforts have yielded new tools and resources for evaluators and practitioners that help to integrate evaluation into projects. These include new products (like this Guide and The User-Friendly Handbook for Project Evaluation); efforts to establish more consistent language and categories of impact (the Framework for Evaluating Impacts of ISE Projects); and initiatives to store project outputs and evaluation reports in accessible and consistent places (such as InformalScience.org).
  • Evaluation in ISE contexts helps us broadly understand how lifelong and informal learning opportunities are contributing to an informed citizenry and scientific workforce, areas of increasing focus and importance from a policy perspective.
  • Evaluation in ISE contexts provides a unique contribution to our understanding of how people learn, which parallels and complements current research aimed at advancing knowledge within ISE and related disciplines.
  • Growing collaborations among ISE project developers and evaluators present tremendous opportunities to develop innovative evaluation methods, to understand and disseminate effective practices, and to develop unified ISE evaluation theory.

Guidelines for professional practice

Evaluators and evaluation are informed by guidelines for professional practice. The American Evaluation Association (AEA) Guiding Principles for Evaluators, the Visitor Studies Association's Evaluator Professional Competencies, and the Joint Committee Standards are described below.

Guiding Principles for Evaluators

The AEA principles are intended to guide the professional practice of evaluators and to inform evaluation clients about ethical practices that they can expect their evaluators to uphold.

American Evaluation Association (2004)

  • A. Systematic Inquiry: Evaluators conduct systematic, data-based inquiries.
  • B. Competence: Evaluators provide competent performance to stakeholders.
  • C. Integrity/Honesty: Evaluators display honesty and integrity in their own behavior, and attempt to ensure the honesty and integrity of the entire evaluation process.
  • D. Respect for People: Evaluators respect the security, dignity and self-worth of respondents, program participants, clients, and other evaluation stakeholders.
  • E. Responsibilities for General and Public Welfare: Evaluators articulate and take into account the diversity of general and public interests and values that may be related to the evaluation.

Evaluator Professional Competencies

The Visitor Studies Association has developed a set of five competencies that evaluators should have or demonstrate. The competencies are relevant to evaluators working in a variety of informal settings such as media, technology, and youth and community projects.

Visitor Studies Association (2008)

  • Competency A. Principles and Practices of Visitor Studies: Evaluators should be familiar with the history, terminology, past and current developments, key current and historic publications, and major contributions of the field. Evaluators should also be familiar with educational theory, environmental design, developmental psychology, communication theory, leisure studies, and marketing research.
  • Competency B. Principles and Practices of Informal Learning Environments: Evaluators must understand the principles and practices of informal learning, the characteristics that define informal learning settings, and how learning occurs in informal settings. An understanding of the principles, practices, and processes by which these experiences are designed or created is required in order to make intelligent study interpretations and recommendations.
  • Competency C. Knowledge of and Practices with Social Science Research and Evaluation Methods and Analysis: Evaluators must not only understand but also demonstrate the appropriate practices of social science research and evaluation methods and analysis. These include research design, instrument/protocol design, measurement techniques, sampling, data analysis, data interpretation, report writing and oral communication, human subjects research ethics, and research design, measurement, and analysis that show sensitivity to diversity and diversity issues.
  • Competency D. Business Practices, Project Planning, and Resource Management: Evaluators must possess appropriate skills for designing, conducting, and reporting evaluation studies. They should demonstrate their ability to conceptualize an evaluation project in terms of scheduling, budgeting, personnel, and contracting.
  • Competency E. Professional Commitment: Evaluators should commit to the pursuit, dissemination, and critical assessment of theories, studies, activities, and approaches utilized in and relevant to visitor studies. Through conference attendance and presentations, board service, journals and publications, and other formal and informal forums of communication, evaluators should support the continued development of the fields of informal science education and evaluation.

Development and implementation of the Visitor Studies Professional Competencies was supported in part by grant No. 04-43196 of the Informal Science Education Program of the National Science Foundation.

Ethical Standards for Evaluation

The final set of professional guidelines, the Joint Committee Standards for Educational Evaluation (JCSEE 2011), includes five categories of standards focused on the evaluation itself, as opposed to the AEA principles, which focus on the evaluator.

Each standard is articulated in sub-statements and descriptive text, but in brief, the five standard categories are:

  • Utility Standards are intended to increase the extent to which program stakeholders find evaluation processes and products valuable in meeting their needs.
  • Feasibility Standards are intended to increase evaluation effectiveness and efficiency.
  • Propriety Standards support what is proper, fair, legal, right, and just in evaluations.
  • Accuracy Standards are intended to increase the dependability and truthfulness of evaluation representations, propositions, and findings, especially those that support interpretations and judgments about quality.
  • Evaluation Accountability Standards encourage adequate documentation of evaluations and a meta-evaluative perspective focused on improvement and accountability for evaluation processes and products.

Conclusion

While the benefits of evaluation are enormous, practitioners and evaluators must keep its limits in perspective. For the most part, evaluation findings address only the questions that the project team originally asked, so Principal Investigators, practitioners, and evaluators must interpret evaluation data, results, and findings in light of broader circumstances and contexts. Evaluation findings themselves do not directly make recommendations or decisions; rather, they inform recommendations and decisions. These limits point again to the importance of collaboration and communication between evaluators and practitioners before, during, and after a project is designed and implemented.