Discover Research

There is a wide body of research applicable to the design, implementation, and assessment of informal STEM education and a growing body of research specifically applicable to science communication and engagement activities.

These studies span approaches to and outcomes of:

  • Learning in everyday or naturally occurring activities, for example, at home, on the job, or in the community.
  • Design, implementation, and use of resources to support self-directed learning, including family or intergenerational learning, for example, gaming, film, broadcast media, and museum exhibits.        
  • Supervised out-of-school time STEM learning programs, for example, afterschool programs, summer camps, field trips, and apprenticeships.
  • Exchanging information and perspectives about science to achieve a goal or objective.
  • Designing and studying the effects of science messages and broader communication campaigns on attitudes, norms, efficacy beliefs, and behaviors.
  • How historical, cultural, technological, psychological, and social forces shape communication about science and its impacts.

There is also research on infrastructure that can support out-of-school time (OST) learning. This includes policies, professional development and preparation, evaluation and assessment, and cross-sector collaboration.

What Is the Difference Between Research and Evaluation?

Fundamentally, research advances theory: theories of learning, of learning design, of instruction, and of measurement. Theory is a tool that can guide practice. For example, research that has developed theories of communities of practice can guide how we structure, support, and evaluate efforts to induct new members into informal STEM education (ISE) learning communities. In some cases, research findings from an individual study can be directly applied or adopted in new settings. More frequently, findings are adapted to new settings, taking into account the particularities of local contexts.

Evaluation, on the other hand, provides information about particular approaches used in particular contexts, with the purpose of improving the approach in that context. Evaluation is not typically meant to create generalizable knowledge, although methods from one evaluation may be adapted to another if relevant.