
Enhancing Evaluation Capacity in Informal Science Education: Q&A with Alice Fu

The new website Enhancing Evaluation Capacity in Informal Science Education, funded by the Gordon and Betty Moore Foundation and developed by SK Partners, provides resources that address the challenges of measuring learning and evaluating impacts within informal STEM experiences and settings. The site's summative evaluation framework links theory, research, evaluation, and practice, and aims to build evaluation capacity for both the informal science education (ISE) community and the philanthropic community that supports it. The framework is a tool for reflecting on summative evaluations through three lenses: the intentions and goals of the informal STEM experience, the design of the summative evaluation, and the potential and actual uses of summative evaluations by stakeholders and the ISE community.

Our Q&A with Alice Fu, Educational Research Scientist with SK Partners, provides further insight into the resources found on www.informalscienceevaluation.org. For her PhD research in science education at the Stanford University School of Education, she analyzed how educators at informal science institutions design and develop field trip programs for schools. Dr. Fu currently manages a project on developing assessment and evaluation capacities in informal science education.


Evaluation capacity building has been a hot topic among evaluators, practitioners and institutions. Can you tell us about the needs and opportunities you identified in this area, and how those led to your current work?

Many of the needs and opportunities that we identified related to supporting smarter uses of evaluation. A particular area of concern was the use of summative evaluation; it is often mandated and resource-intensive, yet the practical payoffs are less obvious than with other types of evaluation, such as front-end or formative. We were motivated to find out how summative evaluation could be better used, perhaps for improving practice, informing decisions, or contributing knowledge to the field. Additionally, we wondered how we might enhance capacity for doing so.

We initially set out to find “good” summative evaluations to examine as models. Predictably, as we began soliciting suggestions from colleagues, reviewing the literature, and analyzing evaluation reports, defining and locating “good” evaluations became an interesting inquiry in and of itself. And, that’s how we arrived at our current work.

For our readers, can you describe the resources you are making available on the website?

Our project has benefited tremendously from others who shared their insights and resources with us, and we see the website as a chance to pass the favor along. As the most direct, and fun, example of sharing insights, we have posted several transcripts from our interviews with leaders in the informal STEM and evaluation communities; you can explore their perspectives under the Activities tab on the website.

Also under that tab are interview protocols, a coding sheet and worksheet for reviewing evaluation reports, and other research tools. For those interested in more distilled findings, we feature an overview of our Framework for Summative Evaluation, as well as papers and presentations from this project and other projects in evaluation and assessment.

Your work is aimed at two audiences, the informal science education community and the philanthropic community. How do you see the resources on the website informing, or supporting, those in the informal science education community who are interested in starting evaluation studies? And those who are experienced in evaluation? How do you hope that the philanthropic community will be able to use the resources on the website?

For both audiences, we hope that the framework-related resources will support them in identifying and considering indicators of quality in summative evaluations. I will be the first to say that the framework is not earth-shattering; its principles are known and practiced by experienced evaluators and users of evaluation.

Personally, I find the framework useful because of its brevity. For any given evaluation, I can quickly bring to mind the framework's three dimensions. The framework prompts me to consider the intervention being evaluated, that is, the program, exhibition, or other type of experience (What do we know about the intervention, its underlying rationale, and how its evaluation fits into the informal learning landscape?); the evaluation methods (How is methodological rigor balanced against appropriateness for the context?); and the evaluation uses (Who are the users of the evaluation, and how well does the evaluation address their needs?). At the very least, I can take a quick but relatively comprehensive look at the evaluation; and, depending on my purposes, I can examine these and related questions more or less deeply.

For those in the informal STEM community, we posit that the framework and its guiding questions could serve as a lens for planning, conducting, or using evaluations. They might also use the framework to look back on a set of completed evaluations and consider their quality, as we did. For those in the philanthropic community, the framework could be used as a tool for setting expectations regarding the summative evaluations that are conducted by grantees. This could in turn facilitate and encourage philanthropic organizations to systematically review evaluation reports across multiple programs. Some of our current work is focused on investigating these potential uses and asking whether these hopes are reasonable and feasible.

I should add a caveat here: the website is not intended as a place to learn how to do evaluations or even how to use evaluations. Our framework is one way to think about evaluation, and we use the website as a place to share some of the resources that we developed from that perspective.

You describe the Summative Evaluation Framework as a way to “succinctly synthesize key elements that comprise a high-quality summative evaluation.” What are the key elements that make a high-quality summative evaluation? In your studies, why did they arise as the most important elements when compared to other characteristics of summative evaluations?

The framework has three key elements: Intervention Rationale, Methodological Rigor and Appropriateness, and Evaluation Uses. Intervention Rationale signals that any high-quality summative evaluation begins with a clear understanding of what program, exhibition, or other type of experience is being evaluated. This entails not only describing the intervention, but also examining its underlying rationale and connecting this logic (or theory of action) to what is already known about similar interventions. I think of this element as “doing your homework,” which helps immensely with identifying the critical questions or assumptions that bear closer examination. In some cases, it could prompt discussion of whether a summative evaluation is even warranted at this time; for example, digging into the literature could prompt project leaders to modify the intervention rather than proceed with a summative evaluation.

The second element, Methodological Rigor and Appropriateness, calls for the most rigorous designs and methods possible given the evaluation questions, available resources, and informal context. Summative evaluations often pose questions about impacts and seek tentative causal interpretations. It is a serious challenge to find methods that are both tightly linked to those types of questions and appropriately responsive to the unique demands of informal settings—for example, dynamic and diverse environments, unpredictable interactions, and participants’ freedom of choice.

The third element, Evaluation Uses, reiterates the importance of keeping stakeholders front and center. It demands an understanding of who might use the evaluation, what information they need, and how best to communicate with them. An experienced summative evaluator is in a unique position to link these three elements, situate the evaluation in the literature, and draw from their own expertise to provide critical perspective on the value of the intervention.

As we reviewed the literature, read evaluation reports, and talked to colleagues in the field, we found that many of the recommendations around evaluation best practices could be boiled down to these three elements. Some elements or principles that are critical to good summative evaluations may be integrated but not explicitly named in our framework. For example, the term “cultural competence” does not appear in the framework, but it is reflected in the framework’s emphasis on understanding and responding to what is being evaluated, the context in which the evaluation is conducted, and who might have a stake in the project. Of course, as with all frameworks, we did not do a perfect job of representing all that we wanted. It’s a work in progress, and one hope is that the website will encourage people to share their reactions. What is missing or misrepresented? What is over- or under-emphasized? How might it be improved?

Can you explain how creating a framework was an important step in building evaluation capacity in the informal STEM education community?

We see it as a way to build shared understandings. People don’t view the world in the same way, and that’s a good thing. But, to move forward as a field, we also need ways to talk with each other. The framework provides one of many possible ways of looking at the world, particularly the world of evaluation in informal STEM education. It contributes to the ongoing conversation about what the community wants out of its evaluations—what are some markers of quality, and what are we striving toward?

The framework could be used to identify areas for building capacity. For example, in our review of summative evaluation reports, the Methodological Rigor and Appropriateness dimension of the framework helped us zoom in on a need for more measures that are seamlessly integrated into the informal learning experience, as opposed to those that pull participants outside of the experience. In turn, that points to a need for developing and recruiting measurement expertise to the informal community.

What are your current thoughts about whether and how summative evaluations are being used to improve practice and/or to provide evidence and support for the value of informal STEM education and learning writ large?

This is such a tough and complicated question, so I’ll address just a couple of points here. I believe that summative evaluations can be used to do those things—improve practice, and provide evidence and support for the value of informal learning experiences. However, it’s much easier said than done.

Related to improving practice, we conducted a case study on a summative evaluation done by Sue Allen at the Children's Discovery Museum of San Jose. The report itself is excellent on many dimensions, which is why we chose to examine it further as a case study. It turned out that what was really remarkable was the way that the museum staff leveraged the evaluation and the information it produced to inform future decisions and strategic planning about major audience development initiatives. Summative evaluations are often thought of as an endpoint, but that perspective limits their potential for improving practice. We need more examples of how summative evaluations can be used both to look back on a specific intervention and to provide a starting point for future work.

As a result of our research, we see opportunities to collect new and different kinds of evidence to strengthen the value argument. A main purpose of summative evaluation is to measure impact; this presents an opportunity for going beyond descriptive studies and collecting evidence about cause-and-effect. We also tend to see a lot of self-report data, and that information is valuable because it provides firsthand accounts of the experience from the perspective of the visitor or participant. However, there are opportunities to collect data that are more behavior-based; these types of direct measures might be more easily embedded into the learning experience, compared to stopping people and asking them questions.

Your project runs through 2016. What are your planned next steps now that the website has launched and is in circulation?

We are continuing to look at practical uses of the framework. We are investigating how the framework might be used as a lens for planning and conducting evaluations, which follows our earlier efforts to use the framework for reviewing completed evaluations. We are working closely with some informal providers in the hopes of both gauging the utility of the framework and continuing to learn about the messy realities and required tradeoffs of conducting summative evaluations in informal STEM education.

We are also investigating models for training and supporting evaluation professionals. There are many excellent evaluation workshops, books, and advice materials available to the informal STEM community, but we are pondering alternative ways to enhance quality and capacity.

We plan to continue improving and updating the website with new findings and resources. Feedback is always welcome! We are grateful to be part of the informal STEM education community, and we hope to continue contributing to the conversations around evaluation quality and capacity building.

Related Resources

A Framework for Summative Evaluation in Informal Science Education

Enhancing Evaluation of Informal Science Education: A Framework for Value

For more information about the website and its resources, contact Alice C. Fu, alice@skpartnersllc.com.

Posted by Patricia Montano