Design Evaluation

Evaluation is a set of approaches and techniques used to make judgments about the effectiveness or quality of a program or treatment; to improve its effectiveness; and to inform decisions about its design, development, and implementation (National Research Council 2010). For an informal STEM project, evaluation generally provides information that can guide the project, suggest how it might be improved, and offer evidence of whether it worked as intended.

When evaluating informal STEM education experiences, four main kinds of evaluation are often considered:  

  • Front-end evaluation occurs during the project planning process. It often takes the form of audience research, gathering data about the knowledge, interests, and experiences of the intended audience. 
  • Formative evaluation guides improvement during the development process by gathering data about a project’s strengths and weaknesses. 
  • Remedial evaluation is carried out when a finished exhibition or program first opens, to see how the individual components work together as a whole and whether any small changes are needed before summative evaluation begins. 
  • Summative evaluation focuses on a project’s overall effectiveness and impact. It is particularly important in making decisions about continuing, replicating, or terminating a project.

What Is the Difference Between Research and Evaluation?

Fundamentally, research advances theory: theories of learning, learning design, instruction, measurement, and so on. Theory is a tool that can guide practice. For example, research that has developed theories of communities of practice can guide how we structure, support, and evaluate efforts to induct new members into ISE learning communities. In some cases, research findings from an individual study can be directly applied or adopted in new settings. More frequently, findings are adapted to new settings, taking into account the particularities of local contexts.

Evaluation, on the other hand, provides information about particular approaches used in particular contexts with the purpose of improving the approach in that context. Evaluation is not typically meant to create generalizable knowledge, although methods from one evaluation may be adapted to another if relevant.