
The ActApp: Sharing Research and Evaluation Tools across the ISE Field

This post was written by Kalie Sacco, Research Associate at the Lawrence Hall of Science.

 

Picture this: you are the lead facilitator at an afterschool program and a first-time grant writer. You are asked to describe the effectiveness of your program, but you’ve never had a way to systematically evaluate it. Or, imagine you are a researcher who has just joined an exciting learning experience design project, but with just one week before the program starts, you do not have time to create and validate research instruments for gathering data about program participants.

As an embedded research and evaluation team at UC Berkeley’s Lawrence Hall of Science (the Hall), we are no strangers to these situations. We have seen our colleagues encounter them many times--and we have been there ourselves! This article describes our effort to respond to these common scenarios: the creation of a research and evaluation toolkit that we named “ActApp.”

What does ActApp measure?

For over six years, a team of researchers at the Activation Lab--a collaborative effort of the Hall and the University of Pittsburgh--has been developing a construct known as Activation. STEM Learning Activation refers to the combination of dispositions, practices, and knowledge that enables success in proximal Science, Technology, Engineering, and Math (STEM) learning experiences. As learners move through such an experience, they can become more or less “activated” toward STEM, and thus more or less likely to be successful in future STEM learning experiences. In this context, “success” is defined as learners choosing to participate in further learning opportunities when presented, showing positive engagement during learning experiences, and achieving the learning experience’s goals.

To date, researchers have investigated five dimensions of the construct of STEM Learning Activation:

  • Fascination with natural and physical phenomena: The learner’s interest and positive affect toward STEM; curiosity about STEM subjects; and goals of acquiring and mastering STEM skills and ideas.
  • Values STEM for self or society: The importance the learner places on being able to know or do STEM-related work because of its utility in meeting personal goals and benefiting society.
  • Competency Beliefs in STEM: The learner’s beliefs about their ability to successfully participate in diverse learning situations as well as their beliefs about having core skills.
  • Innovation Stance: The learner’s enthusiasm for new STEM-related ideas, for trying new ways of doing things in STEM, and for sharing STEM ideas with others.
  • Scientific Sensemaking: The learner’s engagement with science-related tasks and texts as a sensemaking activity, using methods generally aligned with science such as asking good questions, seeking mechanistic explanations for natural and physical phenomena, engaging in argumentation about scientific ideas, interpreting data tables, designing investigations, and understanding the changing nature of science.

Activation Lab researchers developed the surveys that would eventually be used in the ActApp by testing and refining them with thousands of students and learners in the San Francisco Bay Area and Pittsburgh communities. After much design, testing, analysis, and re-testing, we arrived at a set of survey instruments and observation protocols that accurately measured change for 10- to 14-year-olds in the dimensions of Activation across a wide range of learning settings. We then utilized these instruments in large-scale research efforts to establish the value of the construct, dimensions, and instruments in understanding and predicting what positions youth for success in STEM learning. These research and development efforts clearly pointed to the utility of the measures we developed and the potential for using these instruments across many contexts in ways that contribute to research on what “works” in learning.

Our experience developing and using these measures led us to think they would have high value for evaluators or experience designers who want to measure change in learners in the dimensions of Activation, to assess whether learners become more activated toward science or STEM after participating in a learning experience. These dimensions can be measured separately or together; when measured together, analyses can disaggregate the dimensions to see how individual respondents scored on each one. The dimensions of Activation can be measured in both formal and informal learning contexts. Because we designed these instruments to be flexible and modular, and had used them across a wide range of environments and experiences, we knew they would be applicable to the wide range of STEM learning programs currently in play across the field.
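To make that disaggregation concrete, here is a minimal sketch of how per-dimension scores might be computed from a combined administration. The item names, the four-point scale, and the item-to-dimension mapping below are illustrative assumptions, not the actual ActApp items or scoring rules.

```python
# Minimal sketch of disaggregating Activation dimensions from survey
# responses. Column names, scale, and the item-to-dimension mapping are
# hypothetical; real ActApp modules define their own items and scoring.
import pandas as pd

# Each row is one respondent; columns are Likert-scale items (1-4).
responses = pd.DataFrame({
    "fascination_1": [3, 4, 2], "fascination_2": [4, 4, 3],
    "values_1":      [2, 3, 4], "values_2":      [3, 3, 4],
    "competency_1":  [1, 2, 3], "competency_2":  [2, 2, 4],
})

# Map each item to its Activation dimension (hypothetical mapping).
dimensions = {
    "Fascination":        ["fascination_1", "fascination_2"],
    "Values":             ["values_1", "values_2"],
    "Competency Beliefs": ["competency_1", "competency_2"],
}

# Score each dimension separately as the mean of its items, so results
# can be reported together or disaggregated per respondent.
scores = pd.DataFrame({
    dim: responses[items].mean(axis=1) for dim, items in dimensions.items()
})
print(scores)          # one column per dimension, one row per respondent
print(scores.mean())   # group-level mean for each dimension
```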

Developing the ActApp

Once we knew these instruments would be of value to the wider STEM learning field, we set out to determine how best to share them broadly. Two underlying factors motivated us. First, we knew from our own work as learning experience evaluators and designers that evaluation can be challenging for many high-quality learning institutions; in particular, programs may not have sufficient budget to develop valid and/or customized instruments. Second, as learning researchers, we believed that the question of “what works” in STEM education is best answered by using robust research instruments across a range of settings. We imagined that the best way to help evaluators, researchers, and program designers alike (who may not have the means to invest in full-service evaluation or assessment services or expertise) was to create a comprehensive toolkit that would share our instruments, provide guidelines for their use, and offer the opportunity to work closely with users. We believed that providing valid, reliable, accessible, and easy-to-use tools that measure outcomes that matter would increase the quality of research on STEM learning and evaluation of STEM learning experiences. Thus, the idea of the “ActApp” was born.

With funding from the U.S. National Science Foundation PRIME program, we first conducted a needs assessment to make sure that our assumptions about what the field needs with regard to research and evaluation support were grounded in fact. Based on our conversations and surveys with stakeholders from across the field, we identified three things that would be essential for our toolkit:

  • We would need to provide a range of support for toolkit users: some would need little help understanding and using the instruments, while others might require technical assistance with the instruments or support designing an evaluation study;
  • Users would need the option to access paper and web-based surveys (which could be offered online at a live link, or collected offline and uploaded later); and
  • The surveys would need to be generated and sent to users automatically, with no intervention from our team--but we would still need a way to track who was using our surveys.

We consulted Dr. Tapan Parikh of UC Berkeley’s School of Information (now at Cornell University), who had experience creating survey systems for use in resource-limited settings, to identify several technical options for meeting those needs. After several rounds of testing and retesting, we launched a prototype of our Survey Construction Tool, which works like this (a rough sketch of the dispatch logic follows the list):

  1. Users fill out a short Google Form that tells us basic information about their program, lets them choose paper-based or web-based surveys, and lets them select the survey modules they want to use. In addition to surveys that measure the dimensions of Activation, users can also select related instruments developed in our research efforts (such as surveys that measure engagement and career preferences, and a demographics survey).

  2. If users select the paper-based survey option, PDFs of the survey instrument(s) are sent to their email address automatically.

  3. If users select the web-based survey option, a custom survey link is generated and sent to the user via KoboToolbox. Kobo is an open-source survey management system developed for use in resource-limited environments. Users can distribute the link to survey respondents, and can view and download responses through their Kobo account.
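To ground the flow above, here is a minimal, hypothetical Python sketch of the dispatch step. None of this is the Activation Lab’s actual implementation: the form fields, helper functions, placeholder link, and logging line are all illustrative stand-ins for whatever the real Google Form and KoboToolbox integration does.

```python
# Hypothetical sketch of the dispatch step behind the Survey Construction
# Tool. All names here are illustrative stand-ins, not the real system.

def deploy_to_kobo(modules):
    """Stand-in for creating a KoboToolbox form and returning its live link."""
    return "https://example.kobotoolbox.org/x/EXAMPLE"  # placeholder link

def email_user(address, subject, body):
    """Stand-in for the automated email step (e.g. via an SMTP service)."""
    print(f"To: {address}\nSubject: {subject}\n{body}\n")

def dispatch_survey(form_response):
    """Route one Google Form submission to the right delivery channel."""
    modules = form_response["modules"]
    address = form_response["email"]

    if form_response["format"] == "paper":
        # Paper option: send pre-built PDFs of the selected modules.
        body = "Attached: " + ", ".join(f"{m}.pdf" for m in modules)
        email_user(address, "Your ActApp survey PDFs", body)
    else:
        # Web option: generate a custom survey link via KoboToolbox.
        link = deploy_to_kobo(modules)
        email_user(address, "Your ActApp survey link",
                   f"Collect responses at: {link}")

    # Either way, record the request so the team can track survey usage.
    print(f"LOG: {form_response['program']} requested {modules}")

dispatch_survey({
    "program": "Afterschool Robotics",
    "email": "facilitator@example.org",
    "format": "web",
    "modules": ["fascination", "competency_beliefs"],
})
```

The sketch just makes the branching between paper and web delivery, and the usage logging that motivated the design, explicit; in practice the real tool chains these steps automatically from the Google Form submission.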

We then worked with a group of potential ActApp users--educators, researchers, and evaluators--to test the system. Working closely with us, users piloted the Survey Construction Tool to generate a paper survey or survey link, then administered the survey at their programs. In several follow-up sessions, we co-analyzed the data with users to help them draw conclusions about their programs and generate questions for follow-up evaluation. These conversations were instrumental in developing the comprehensive instructions that accompany the Survey Construction Tool and, together with it, comprise the complete (for now!) ActApp toolkit. The toolkit is available at www.activationlab.org/toolkit.

Next Steps

Although our NSF funding (Award #1348666) to develop the website has ended, we are continuously refining and expanding our understanding of Activation through other research efforts. We are also thinking about the best way to share the website across the field and how to create new support mechanisms, like trainings. If you’re a current or potential ActApp user, we’d love to hear any ideas you have on how to share and use this information--please drop us a line at info@activationlab.org.

We will also be presenting a poster at the upcoming ASTC annual conference in San Jose, CA, on Sunday, October 22, at 10:15 a.m. If you’re at the conference, please stop by to say hello, learn more about Activation, and check out the ActApp.

Further Resources

A full list of publications, including conference presentations, is available at http://www.activationlab.org/research/. To get started learning more about Activation, we recommend the readings listed there.
