Considerations of Measuring Engagement in Informal Contexts

This article was migrated from a previous version of the Knowledge Base. The date stamp does not reflect the original publication date.

Overview 

Broadly speaking, engagement is associated with motivation, persistence, and achievement within and outside of science (Fredricks, Blumenfeld, & Paris, 2004; Stewart, 2008; Wang & Holcombe, 2010; Ainley & Ainley, 2011; Pekrun & Linnenbrink-Garcia, 2012; Tytler & Osborne, 2012). However, what we mean by “engagement,” and the methods we use to assess it, vary considerably. While the work on engagement in science experiences is informative and productive, the inconsistent use of the term “engagement” may make it difficult to synthesize findings across contexts and literatures. A recent special issue of Educational Psychologist (Vol. 50, Issue 1) begins to tackle this issue and provides an excellent review of these complexities for those broadly interested in the engagement literature. The current article raises some of these issues and provides an example of how one set of researchers addressed them through their assessment. This case example is offered to highlight one solution, not as best practice. I encourage other researchers to add their examples, as well.

Findings from Research and Evaluation 

Engagement research has been growing over the past decades (Sinatra, Heddy, & Lombardi, 2015), and with good reason. Understanding learners’ engagement can provide insight into their experience with science content and activities; engagement is also predictive of future success in science (e.g., Ainley & Ainley, 2011; Tytler & Osborne, 2012) and is versatile in its research and evaluation applications. For example, when measured at small grain-sizes (e.g., one activity), it can provide formative feedback to evaluators as to how learners are experiencing a particular activity. Alternatively, engagement can be used as an input for measuring changes in learners’ motivation towards science over time (Sha, Schunn, Bathgate, & Ben-Eliyahu, 2015; Bathgate & Schunn, in review).

This diversity provides rich research opportunities, but it also leads to complexity when synthesizing findings or selecting an approach for one’s informal assessment needs. For example, Fredricks et al. (2011) reviewed 21 instruments measuring learner engagement and found these measures to vary by factors such as the subdimensions of engagement they measure (emotional, cognitive, behavioral), the level of context they measure (e.g., items asking about a particular science activity vs. science more generally), and the additional elements they include in their assessment of engagement (e.g., self-regulation).

There are also different views on the most productive methodology for assessing engagement. For example, should engagement be measured by a reflective self-assessment completed by learners, which captures subjective experience, or by an observational method, which allows a (theoretically) more objective view based on particular behaviors the learner undertakes (Minner, Levy, & Century, 2010)? Obviously, the selection of a method depends strongly on the particular context and research question being addressed, but raising awareness of these inconsistencies and measurement constraints can better support both the synthesis of research and the practical selection of assessments and methodologies for measuring engagement in STEM education.

Adding to this complexity is the challenge of measuring engagement within informal spaces, where learners may be “dropping in” for a couple of hours and may never return. In those cases in particular, gathering data from those learners to meet formative or summative assessment needs comes at the cost of potentially interrupting the child’s experience, which is one argument made for observational or time-sampling methods of measuring engagement.

Asking what it means to be engaged in “science” specifically adds yet another layer to this issue. That is, if we want to know whether learners are engaged in a science activity (e.g., doing the things the activity requires, having a positive emotional experience), we may be asking a different question than whether the learner is engaged with the practices and skills of science (e.g., generating evidence, asking investigable questions). (See Sinatra, Heddy, & Lombardi, 2015 for more details on this idea.) It may not be enough to assume that a child who is engaged in an activity involving science concepts is therefore engaged in the particular skills associated with authentic science practices.

Researchers have taken multiple approaches to addressing these issues, and I present one such approach here, outlining some of its benefits and drawbacks. This example is not necessarily presented as ideal practice, but rather showcases how one set of researchers has approached these complexities. I invite others to add their examples, as well.

One approach is exemplified by the work of the Activation Lab (http://www.activationlab.org/tools/), which measures learners’ engagement via a brief survey taken immediately following a particular activity (Sha, Schunn, Bathgate, & Ben-Eliyahu, 2015; Bathgate & Schunn, in review). This approach positions the assessment in at least four ways. First, it measures the learner’s subjective experience in order to assess internal thoughts and feelings (e.g., emotional and cognitive engagement) that may not be clearly observable in particular contexts, such as a class. Second, because it is subjective and survey-based, there is no need to train researchers to observe the learners. This may be particularly beneficial under circumstances where adults in the learning space would inhibit the design of the activity (e.g., learners are given high autonomy and little guidance) or lead the child to feel evaluated during the experience. Third, it measures learners’ engagement as close to the activity as possible without interrupting it (i.e., directly following its completion), thus avoiding retrospective questions about engagement and decreasing memory biases. Finally, the assessment asks about learners’ engagement at the level of a single activity, which can be particularly useful for formative evaluation.

However, there are drawbacks to this approach, depending on the goal of the research being conducted. For example, if researchers aim to understand learners’ engagement with science more broadly (i.e., outside of a particular experience), the variance in learners’ engagement across particular activities may be an issue if engagement is measured only once using this approach. Since activities vary in their context, topic, and process across time, learners’ engagement may vary systematically along with some element of those activities (e.g., topic) (Bathgate, Schunn, & Correnti, 2013). As such, if one chooses this smaller grain-size, it may be necessary to measure engagement at multiple time-points (when possible) and create some type of composite across the surveys (e.g., an average), as sketched below. Additionally, this assessment does not measure the quality of the activity; it is solely reflective of the learner’s subjective experience. One should therefore not assume that a highly engaged learner is experiencing active, authentic, in-depth scientific processes. This is typical of many engagement measures, but the distinction should be understood.
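To make the composite idea above concrete, the following minimal Python sketch averages each learner’s post-activity engagement ratings across several time-points into a single score. The data, field names, and 1-5 rating scale here are hypothetical illustrations, not the Activation Lab’s actual survey items or scoring procedure.

    from statistics import mean

    # Hypothetical post-activity survey records: one overall engagement
    # rating per learner per activity, on an assumed 1-5 scale.
    responses = [
        {"learner": "A", "activity": "water filtration", "engagement": 4.2},
        {"learner": "A", "activity": "circuits", "engagement": 3.1},
        {"learner": "A", "activity": "fossil dig", "engagement": 4.8},
        {"learner": "B", "activity": "water filtration", "engagement": 2.5},
        {"learner": "B", "activity": "circuits", "engagement": 2.9},
    ]

    def composite_engagement(records):
        """Average each learner's ratings across time-points, smoothing out
        activity-to-activity variation (e.g., differences by topic)."""
        by_learner = {}
        for r in records:
            by_learner.setdefault(r["learner"], []).append(r["engagement"])
        return {learner: mean(scores) for learner, scores in by_learner.items()}

    print(composite_engagement(responses))
    # {'A': 4.033..., 'B': 2.7}

A simple mean is only one choice of composite; depending on the research question, one might instead weight activities differently or model activity-level variation directly.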

Before selecting a method for measuring engagement in informal spaces, one should consider the constraints of measurement within the context under study. Perhaps this statement seems overly obvious, but being sensitive to how measurement may influence learners’ experience often involves a trade-off with meeting one’s research goals. A methodology should be chosen carefully, balancing the need to limit disruption of the learning environment against the needs of the assessment.

Directions for Future Research 

(See Sinatra, Heddy, & Lombardi, 2015 for a great summary of considerations in measuring engagement in science.)

References 

Ainley, M., & Ainley, J. (2011). Student engagement with science in early adolescence: The contribution of enjoyment to students' continuing interest in learning about science. Contemporary Educational Psychology, 36(1), 4-12.

Azevedo, R. (2015). Defining and measuring engagement and learning in science: Conceptual, theoretical, methodological, and analytical issues. Educational Psychologist, 50(1), 84-94.

Fredricks, J., McColskey, W., Meli, J., Mordica, J., Montrosse, B., & Mooney, K. (2011). Measuring student engagement in upper elementary through high school: a description of 21 instruments. (Issues & Answers Report, REL 2011–No. 098). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Southeast. Retrieved from http://ies.ed.gov/ncee/edlabs

Greene, B. A. (2015). Measuring cognitive engagement with self-report scales: Reflections from over 20 years of research. Educational Psychologist, 50(1), 14-30.

Pekrun, R., & Linnenbrink-Garcia, L. (2012). Academic emotions and student engagement. In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement (pp. 259-292). New York: Springer.

Renninger, K. A., & Bachrach, J. E. (2015). Studying triggers for interest and engagement using observational methods. Educational Psychologist, 50(1), 58-69.

Ryu, S., & Lombardi, D. (2015). Coding classroom interactions for collective and individual engagement. Educational Psychologist, 50(1), 70-83.

Sha, L., Schunn, C., Bathgate, M., & Ben-Eliyahu, A. (2015). Families support their children’s success in science learning by influencing interest and self-efficacy. Journal of Research in Science Teaching. DOI: 10.1002/tea.21251

Sinatra, G. M., Heddy, B. C., & Lombardi, D. (2015). The challenges of defining and measuring student engagement in science. Educational Psychologist, 50(1), 1-13.

Tytler, R., & Osborne, J. (2012). Student attitudes and aspirations towards science. In B. J. Fraser, K. Tobin, & C. J. McRobbie (Eds.), Second international handbook of science education (pp. 597–625). New York, NY: Springer International.

Wang, M.T., & Holcombe, R. (2010). Adolescents’ perceptions of school environment, engagement, and academic achievement in middle school. American Educational Research Journal, 47(3), 633-662.

Posted by Meghan Bathgate