Evaluation 101: Everything you need to know to get started evaluating informal science education media

July 26th, 2008 | RESEARCH

You know almost exactly what you want to do to improve the public understanding of science and technology. But you don't have much of an idea about how to start evaluating your project, improve its effectiveness, and then demonstrate its success. Evaluation 101 to the rescue. This workshop will begin with "Why do an evaluation?" and "What is an evaluation?" and quickly follow with "How would this work with a planetarium show, website, or television show?" We will help participants identify the products or processes in their ISE initiatives. The rationale will include interactive discussions of the value of improving the product, communicating its impact or value, responding to questions about the initiative, clarifying the content and presentations to better serve the needs of the audience, and building the next program or media product. The workshop will be based on the content of EvaluationSpringboard.org, an existing, freely available, and accessible website. Topics include creating a logic model, formulating and prioritizing evaluation questions, human subjects and informed consent, identifying evaluation types, identifying evaluation methods, planning for and collecting data, analyzing and interpreting data, and reporting and using findings. The labs match the content covered in the recent Framework for Evaluating Impacts of Informal Science Education Projects.

Document

2008_PI_Summit_Workshop_Slides_Rockman_Borse_02.pps

Team Members

Saul Rockman, Contributor, Rockman et al.
Jennifer Borse, Contributor, Rockman et al.

Funders

Funding Source: NSF

Tags

Audience: Educators/Teachers | Evaluators | Museum/ISE Professionals
Discipline: Education and learning science | General STEM
Resource Type: Presentation Slides | Reference Materials
Environment Type: Media and Technology