
Evaluation of Media Projects: An Analysis of Reports Contributed to InformalScience.org

Today from the BISE (Building Informal Science Education) project, we explore what might be learned by examining the subset of evaluation reports on informalscience.org that focus on informal science education (ISE) media. We selected 101 evaluation reports that focused on some aspect of media, broadly construed. In most cases, the reports focus on broadcast television shows or films, but there are also large-format film, radio, and print examples.

Media evaluation reports fell into five main categories. Children's television programming was the largest, with 56 reports, covering shows such as Cyberchase and Dragonfly TV. Twenty-five reports concerned documentary film projects for television, such as Origins of the Universe and Journey to Planet Earth. Nine reports dealt with adult-directed television programming (regular and news series), such as Nova ScienceNOW and KQED Quest. Six reports evaluated large-format video or Science On a Sphere programs, and five reports documented radio/audio programs or print media.

Methods Used

As in other project types represented on informalscience.org, evaluation reports often describe multiple studies of different components within larger NSF-funded projects. For example, a report might document aspects of a summative evaluation of an exhibition as well as an evaluation of a related web-based experience or a teacher professional development workshop.

Responding to the unique needs of media projects, evaluators have often developed specialized evaluation questions and methods. A relatively small group of evaluators has become expert in media evaluation and has produced much of the evidence base: although 20 different groups uploaded media reports to the site, just four groups were responsible for more than half of the reports in this sample.

Surveys, focus groups, interviews, and observations of audiences engaging with media are among the most common methods used in these evaluations. Surveys are the dominant method of data collection in media projects, multiple methods are commonly combined, and interviewing is also very common. Children's television programming reports provided the broadest range of study styles in the media sample: many studies used interviews, but novel observation and task-centered experiments were also reported. Control group studies as well as pre- and post-tests were also frequently used in media reports.

As media projects are often broadcast to a large audience spread across a wide geographic area, one of the core challenges of evaluating media is finding participants to study. Evaluators attend to geographic location and demographic information to try to locate the typical audiences targeted by television stations and shows. For example, for the evaluation of The Shape of Life documentary, the evaluator sent questionnaires to the member lists of five aquaria from diverse parts of the country (Knight-Williams, 2003). In another evaluation, the sample was compiled from the firm's database of museum visitors, PBS viewers, and viewers of other science, nature, and history programming who had participated in no more than two prior studies in the past two years and who fit the screening criteria for the project (Knight-Williams, 2010).

Exposure is another challenge for media evaluation. Studies vary in whether audiences have viewed programming on their own or are asked to view the program as part of the study process. For children's programming, groups of children in classrooms might view segments, with follow-up surveys or interviews. In some cases, parents might also be asked to provide at-home logs and diaries of segment watching (e.g., MCG Research & Consulting, 2002). For adult programming, participants self-report viewing of the target media (e.g., Peterman, Pressman & Goodman, 2007), or in some cases, target DVDs are mailed out for viewing (e.g., Knight-Williams, 2010).

Exposure and sampling are two challenges encountered throughout ISE evaluation. Evaluators wrestle with what it means to have a “naturalistic” experience in an informal setting. The media reports show different strategies for thinking about these issues than we see in other segments of the evaluation reports.

Formative Evaluation

Some of the most rigorous (and clever) examples of how formative evaluation shapes projects can be found in the media segment posted on informalscience.org, and there are many to learn from. In the children's programming segment, for example, nearly half of the reports posted were formative (compared with 20% across all of the reports posted to informalscience.org). Potential audiences have been extensively consulted to determine aspects of the design that are visually appealing, confusing, or preferred (e.g., Flagg, 2002). Short clips or illustrated storyboards are shown to members of the target audience to test for appeal and concept comprehension. For example, in an evaluation of the children's program Cyberchase, pre-K, kindergarten, and first-grade children were tested for comprehension after viewing three episodes. Children were observed during the viewing and were interviewed later about their understanding of the narrative as well as the mathematical concepts and the problem-solving steps undertaken by the characters. Evaluators were able to pinpoint age-related comprehension issues and character preferences (MCG Research & Consulting, 2002). In the children's programming segment of media, formative evaluation is seen as a highly valued part of decision-making. There is a wealth of evidence here of how producers use evaluation data to strategize and then test how children engage with, and learn from, the content they wish to share.

Documenting Learning & Impact?

Image taken from Evaluation of Peep and the Big Wide World

Among the sample of media reports, we found clear documentation of informal STEM learning impact: control group studies, pre-post designs, and follow-up studies have been conducted to show changes in audience knowledge of STEM. Media evaluators are skilled in working with children, even at a young age, to find out what they grasp from watching a television program. In an evaluation of Peep and the Big Wide World, evaluators set up a quasi-experimental study to look at how children might learn science inquiry and process skills. Providing an activity similar to one used on the program, they found that 64% of children aged 3-5 who had watched the show, versus 14% of the control group, modeled science process skills they had seen on the show. The study used parent viewing diaries and surveys to better understand the children's viewing habits (Beck & Murack, 2004).
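To make that kind of viewer-versus-control comparison concrete, here is a minimal sketch of how a difference between two group proportions might be tested. This is not the analysis from the Peep report: the group sizes below are hypothetical placeholders, and only the 64% and 14% rates come from the evaluation as summarized above.

```python
# Illustrative sketch of a two-proportion comparison, as in a quasi-experimental
# viewing study. Group sizes (50 per group) are hypothetical, NOT from the report;
# only the 64% vs. 14% rates are cited in the text.
from math import sqrt, erf

def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for the difference between two independent proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)        # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))    # pooled standard error
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))     # normal approximation
    return z, p_value

# Hypothetical groups of 50, with rates matching the reported 64% vs. 14%
z, p = two_proportion_ztest(successes_a=32, n_a=50, successes_b=7, n_b=50)
print(f"z = {z:.2f}, p = {p:.4f}")
```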

Image taken from "400 Years of the Telescope"

A challenge for media evaluation is the difficulty of assessing media exposure in a naturalistic setting. Another challenge is that NSF-funded projects rarely consist of a single product. Recently, some ambitious evaluations have begun to address both of these issues by looking at the cumulative impacts of engaging in multiple experiences developed by a project in a more naturalistic setting. For example, in a study of 400 Years of the Telescope, a project that included a one-hour PBS documentary, a 22-minute planetarium program, a project website, "Star Parties" nighttime viewing events, and promotional events held by PBS affiliate stations, evaluators looked at the cumulative impacts of engaging with multiple components of the project (Yalowitz, Foutz, & Danter, 2011). In a sample of just over 1,000, they found that 29% engaged in more than one deliverable experience from the project. The team looked at the impact of the order in which experiences occurred as well as individual and combined outcomes for the experiences. Multiple experiences did produce higher outcomes (though the evaluators recognized that some participants were already actively interested in astronomy).
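A simple sketch of the general approach such cumulative-impact studies take is shown below: count how many project components each respondent engaged with, then compare outcomes by level of exposure. The column names and the tiny example data are hypothetical, not the evaluators' actual instrument or results.

```python
# Illustrative sketch (hypothetical column names and data, not the evaluators'
# analysis): tabulate how many project components each survey respondent engaged
# with, and compare a mean outcome score by level of cumulative exposure.
import pandas as pd

# Each row is one respondent; 1 = engaged with that component, 0 = did not.
survey = pd.DataFrame({
    "documentary":   [1, 1, 0, 1, 0],
    "planetarium":   [0, 1, 0, 0, 1],
    "star_party":    [0, 1, 1, 0, 0],
    "website":       [0, 0, 0, 1, 0],
    "outcome_score": [3.2, 4.5, 2.8, 4.0, 3.1],  # e.g., a post-experience knowledge scale
})

components = ["documentary", "planetarium", "star_party", "website"]
survey["n_components"] = survey[components].sum(axis=1)

# Share of respondents engaging with more than one deliverable
multi_share = (survey["n_components"] > 1).mean()
print(f"Engaged with more than one component: {multi_share:.0%}")

# Mean outcome by number of components engaged
print(survey.groupby("n_components")["outcome_score"].mean())
```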

Among ISE media reports, one can find good examples of follow-up studies, but most of the studies document changes shortly after exposure, perhaps a few months at most. What is missing from the media reports is documentation of the life-changing experiences and impacts that we all believe are real. Interviews with scientists often reveal the pivotal role that a media experience (e.g., Carl Sagan's Cosmos, Wild Kingdom, Bill Nye) played in inspiring someone to pursue the study of science. We don't yet have systematic evidence in the field to document these impacts. While it may be too much to ask of any single evaluation to fill that kind of gap, it highlights the need for additional evidence that may be beyond the scope of studies funded as part of the average three-year ISE media project.

Next Steps

As noted in Learning Science in Informal Environments, the field is still awaiting a large-scale, comprehensive look at how people learn from media (Bell et al., 2009; Rockman, Bass & Borland, 2007). This group of evaluation reports illustrates some of the diversity of work going on in the media sector, but clearly much work remains to be done. In the media reports, we find excellent examples of how evaluation helps to improve the design of media projects. Short-term gains in learning, interest, and attitudes have also been well documented. We know that media has an impact on STEM learning, and looking beyond the project-based evaluation study, the field would be well served to investigate the ways in which media experiences might be contextualized within a larger ecology of STEM experiences. Next steps also include using what the field understands about media experiences to explore interactive and non-broadcast media in closer detail. New media projects are, at this point, underrepresented in the evaluation report database.

How You Might Use Our Resources

The BISE team has coded over 500 reports posted to informalscience.org. In addition to sharing our Coding Framework, we will share our NVivo database and related spreadsheets. You’ll be able to search these resources to find evaluation reports that have certain characteristics.

Looking for a summative evaluation report that uses timing and tracking in an exhibition? We've made it easy. How about the questions evaluators use to find out what people know about a potential exhibit or program topic? You can find that too!
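As a rough illustration of that kind of search, here is a minimal sketch of filtering a coded-report spreadsheet for reports with particular characteristics. The file name and column names are hypothetical stand-ins; the actual BISE spreadsheets are organized around the project's own Coding Framework categories.

```python
# Illustrative sketch: filtering a BISE-style coded spreadsheet for reports with
# particular characteristics. File name and column names are hypothetical.
import pandas as pd

reports = pd.read_csv("bise_coded_reports.csv")  # hypothetical export of the coded reports

# Summative evaluations of exhibitions that used timing-and-tracking observations
matches = reports[
    (reports["evaluation_type"] == "summative")
    & (reports["project_type"] == "exhibition")
    & (reports["methods"].str.contains("timing and tracking", case=False, na=False))
]
print(matches[["report_title", "year", "evaluator"]])
```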


Posted by Karen Knutson