"What I mean by using evaluation as a learning tool is that ... evaluation becomes a means to an end, rather than the end."
"Evaluation is a skill–based profession. For me, evaluation is not a simple–minded activity that anyone can do."
"In my practice, I use inquiry all the time. That is, I ask a lot of questions to deeply understand the intent behind the project. Evaluators need to have a thorough understanding of a project if they are going to evaluate it. I also ask questions about data to understand it better. I learn by asking questions."
"It is so hard to design with impact in mind because achieving impact suggests changing human behavior."
"... when looking at the impact categories of engagement and behavior, I had a breakthrough. I realized that if I added the word "peer" to "engagement" and the word "group" to "behavior," then thinking about peer engagement and group behavior allowed me to enter what the online community seems to be about."
"Museum professionals rarely discuss the intrinsic value of what they do and intrinsic quality of museums."
"Methodologically it's extremely difficult to measure the effectiveness of an online project ... I think it is challenging to have valid and reliable data."
"It's important to realize that learning never ends. I think we all know that, but then we need to also change our own practice to keep up with our learning."
Interview by Sasha Palmquist :: Summer 2008
Palmquist: How do you define your professional field?
Korn: I define it as evaluation, but I like to think about evaluation in the context of the whole organization. In other words, the field of museum evaluation has done so much to help practitioners learn about how to create great exhibitions and programs, but evaluation also needs to serve the whole museum, and currently it doesn't. More often, evaluation lives only in the program realm of museums. So I like to use evaluation as a tool to help the whole museum achieve its mission and have impact.
Palmquist: What are some of the challenges then in achieving this goal?
Korn: One challenge is to understand what it means to raise evaluation to an institutional level. That is not the same as institutionalizing evaluation, which simply means that museums conduct evaluation. Doing evaluation only at the program level doesn't help everyone who works in the museum. However, using evaluation as an organizational process can bring people from across the museum together around the mission, and the organization can become more mission driven and mission focused, enabling the organization to function better. A lot of practitioners tell me they work in silos. I believe evaluation, if designed to examine the impact of the whole museum, can break down the silos.
Palmquist: What are some challenges that the evaluation field faces in terms of professional development?
Korn: A major challenge is that evaluation is grossly misunderstood. It's perceived as the lowly stepchild when, in fact, it's actually applied research. Even though evaluation is often project specific, there is always an opportunity to learn from evaluation. However, I find evaluation is rarely used as a learning tool. More typically, it's used as a judgment tool, where people want to know if visitors like or dislike something. What I mean by using evaluation as a learning tool is that evaluators conduct an evaluation to determine the ways in which a program, exhibition, or museum achieves its intentions and, in the process, uncover the nuances and qualities of visitors' experiences; in turn, practitioners reflect on what they did and how visitors experienced what they did and use that information and their reflections to improve their practice. In this way, practitioners become learners. Evaluation becomes a means to an end, rather than the end.
Palmquist: In what ways can the web be used to support professional development? I'm curious about how you might personally use the web in this way, as well as how you think others might use the web for professional development.
Korn: First, let's talk about the ways that others might use the web for professional development. Any website about evaluation would need to demonstrate that evaluation is a serious field of study. Evaluation is a skill-based profession. For me, evaluation is not a simple-minded activity that anyone can do. Sometimes I feel a tension between helping young professionals become good evaluators and supporting, for example, museum educators—or anyone who has another job—looking for guidance from a website so they can conduct an evaluation. That's a difficult tension. It may be helpful to frame the problem by using an analogy. No one should trust me to go into a natural history museum to teach biology. I love natural history museums and I am intensely curious about the natural world and biology and love looking at specimens, but does that mean I can teach biology? I have met practitioners who become frustrated with me because they don't understand why they shouldn't conduct an evaluation without any training to do so. Sure, anyone can do evaluation; no one is stopping them. However, I hope that no one ever asks me to teach biology, or art history for that matter, because I am not trained to teach those subjects.
With respect to my own professional development, in addition to being interested in organizational development, advancement, and change, I am also interested in using evaluation to help museums make a difference in people's lives. There are schools of thought outside the museum sector where many people think about evaluation. There's some evidence that these communities struggle with the same things that interest me—especially in terms of raising evaluation to a higher level so that it functions as a learning tool within organizations. It is very useful to read journals from the American Evaluation Association and from the philanthropic world because the social sector has been struggling with evaluation for a lot longer than museums have. So if a website were to embrace or raise awareness about other pockets of practitioners struggling with similar issues—that would be valuable. Providing connections and access to those other communities might educate us about helping to raise evaluation to a higher level so it could function as a learning tool. Knowing there are other perspectives on how to make an organization function more effectively, as well as how to make a program or an exhibition better, would be useful to me personally. And I hope that my evaluation colleagues would find it useful as well.
Palmquist: Yes, that would definitely be useful. Could you describe how that kind of resource might be designed to provide access to the most useful information?
Korn: The idea would be to provide a way for practitioners from other sectors to connect and share their perspectives around common issues. For example, how do other sectors define the concept of impact? What suggestions do practitioners from other sectors have for inspiring management to consider evaluation practice as a valuable learning tool? If the goal of a website is to help museum practitioners become better evaluation practitioners, it would be useful to have links to other communities of evaluators that are struggling with these larger organizational issues. Raising awareness of other communities, associated literature, and people who have dedicated their own professional lives to organizational effectiveness and using evaluation as a learning tool could be extremely valuable.
Palmquist: Do you see a unique value or opportunity for a web-based community where people exchange ideas, grapple with issues, or share strategies?
Korn: Yes, of course. Although I prefer to network privately and am not one to outwardly participate in these designed social networks, I fully understand not everyone is like me. I see the value for people to publicly network. Maybe I will someday, but right now, I prefer to do so privately or face-to-face when possible.
Palmquist: What advice might you have for young evaluators who are interested in measuring the impacts of online communities?
Korn: A lot of people do come to me with questions, but no one has ever asked me about evaluating online communities! One important piece of advice—regardless of the medium—is to remember that there is a strong relationship between planning and evaluation. You need to inspire whomever you're working with to be really clear about what they want to achieve and to seek clarity up front. Peel back the layers until you begin to see the essence of what you want to do. Without that clarity, your evaluation will be meaningless and not at all useful. Finally, make sure that you start an evaluation process at the right time, which is at the beginning of a project. I like to say, "Begin with the end in mind."
Palmquist: How far in the beginning do you think is the right time? Is it, we have an idea and we want to develop a relationship, or we're not quite sure, but we would like to have the perspective of an evaluator to help us make sure that we are talking about measurable impacts?
Korn: It could be either scenario. In the first example—where the idea was more complete—the evaluator could ask probing questions to further clarify the idea. However, in both cases, the evaluator could assist by asking questions. Questions often help people hear what they are thinking. In my practice, I use inquiry all the time. That is, I ask a lot of questions to deeply understand the intent behind the project. Evaluators need to have a thorough understanding of a project if they are going to evaluate it. I also ask questions about data to understand it better. I learn by asking questions.
Palmquist: And does a deep understanding include being familiar with the broader community? For example, if it is an online community being developed, does the evaluator need to have an understanding of that community and its audience?
Korn: Actually, I think so. As an evaluator, I can add value to a project because I know museums and I know museum visitors. I cannot add value to a project if I don't have a good understanding of the potential online community. To do that, I would want to review some literature before answering your question, because I like to have a grasp of the particular content before I can be useful and ask the right questions. So I'm suggesting that I might not know the right questions to ask right now if it has to do with evaluating an online community.
Palmquist: You served as a member of the panel that developed the new NSF Framework for Evaluating Impacts of Informal Science Education Projects. What were some of the challenges during the creation of the document? And what are some of the challenges of implementing the framework?
Korn: In terms of the challenge in creating the document, it felt a little counterintuitive to organize the book by medium or audience (e.g., exhibition, media, community, etc.) as most projects are very large and include many of these components. There are some basic tenets to evaluation, and I think neither medium nor audience matters. Saying that—maybe I'm contradicting myself from an earlier answer—I could participate in evaluating an online community if we are dealing with the basic tenets of evaluation. These basic tenets have to be well understood, whether it's a community program, an exhibit, a movie, or an online community. So I would much rather have seen a few chapters devoted to the relationship between planning and evaluation to really drive that home. In my opinion, it isn't driven home nearly enough, but there may be other opportunities—meetings, workshops—to communicate the potential learning opportunities associated with evaluation.
Palmquist: And what is the biggest challenge of implementing the frameworks?
Korn: The biggest challenge is not so much implementing the framework but rather making a difference in people's lives. It is so very hard. This is the challenge that museums face. It is the challenge of every single NSF-funded program because NSF's goal is to make an impact. It is so hard to design with impact in mind because achieving impact suggests changing human behavior. This challenge is also the same for organizations. After all, organizations are composed of people, and people have habits that are really hard to change.
Palmquist: In an earlier discussion about the framework, you identified impact and designing for impact as something particularly difficult. Why is that?
Korn: For one thing, it has been really difficult for people to describe and to clearly articulate what they mean by success. Articulating success for an online community project is particularly difficult because anyone can enter the online community. And the online participants may not be the audience the project was designed to support.
In terms of specific feedback on the framework, it seemed to me that something is missing when the framework is considered in the context of online communities. However, when looking at the impact categories of engagement and behavior, I had a breakthrough. I realized that if I added the word "peer" to "engagement" and the word "group" to "behavior," then thinking about peer engagement and group behavior allowed me to enter what the online community seems to be about. So the framework may need adjustments if it is to be useful for the evaluation of online communities. However, there is enormous flexibility in using the framework. Although you may be confined to behavior as an impact category, you can expand what you mean in any particular case. In terms of an online community, it might be "group behavior" and here's how we define it. That was a big breakthrough for me. By reframing this, I realized the possibilities of what you can begin to measure and what you can begin to clarify as impact or success.
Palmquist: That's critical because one struggle with creating working definitions is establishing the difference between an outcome and an impact. How do you negotiate between them?
Korn: And what's the difference between an output and an outcome? The museum community is stuck on outputs. Historically, museums have focused the discussion about success on numbers of visitors, numbers of objects or exhibits, and amount of dollars raised—all of which are examples of outputs. Every day I try to shift people's thinking from outputs to outcomes and impact. Outcomes and impacts are of a higher order. To further explain what I mean, I'll use an analogy discussed in a report called Gifts of the Muse, which makes a distinction between instrumental effects and intrinsic benefits. Museums are consistently focused on the instrumental effects, which are the economic benefits of museums in communities (they bring in tourists, tourists spend money, etc.) or that school children might learn specific content. How much money? How many school children learn? Museum professionals rarely discuss the intrinsic value of what they do and the intrinsic quality of museums. For an art museum, which is the focus of Gifts of the Muse, an example of the intrinsic benefits could be deepening humans' capacity for empathy, which could lead to stronger, sustainable communities. The report, commissioned by the Wallace Foundation, explains this far better than I ever could.
Palmquist: How do you describe the difference between indicators of success and evidence?
Korn: From my perspective, indicators of success and evidence are on a continuum. So when planning a project one could identify indicators along a continuum—say from beginning to exemplary—that suggest what you hope to achieve, what success looks like. We often use rubrics, which include indicators, to measure a project's success. Indicators also help the evaluator know what they need to measure. In the case of online professional development communities, an indicator for group behavior might be "two communication threads develop" and the evidence is the data from the social networking. The variable is the number of communication threads. As you move along the continuum, exemplary results might be "four communication threads develop." If planners want more than two communication threads to develop, they may need to do something to motivate the online community to create another communication thread. This example demonstrates the relationship between planning (intent) and evaluation (outcomes). If you want a specific outcome, you may need an action to take place. The two distinct communication threads are the data and evidence. If you needed indicators for "knowledge" on the NSF impact framework, the variable might be the complexity of someone's communication exchange. For a beginning level indicator, it might be identifying an idea or concept; for the exemplary indicator, it might be analyzing the efficacy of an idea or concept.
To me, impact is just the language we are using today. We used to talk about goals and objectives, then it became outcome-based evaluation, and now we have risen to the level of impact. While I think "impact" is what we need to be thinking about now, who knows what will be next?
Palmquist: How might you describe the difference between evaluating professional development opportunities in person versus online professional development through web-based community development?
Korn: Well, learning that occurs online is probably more personal. The learner goes to the website with a particular agenda in mind. There's a similarity between the face-to-face professional assessment and the online assessment. It's possible that the gap is larger in the online case because that community is more elusive—sometimes by happenstance, sometimes not. Are the right people in need aware of your website? Is there sufficient common ground shared among learners? Can you get into the head of the online learner as much as you might get inside the head of the person sitting in front of you? Right now, we are having a pretty good conversation, and I don't know what our conversation would be like if we were online or over the telephone.
Palmquist: Methodologically, do you think there would have to be some differences, or could there be some similarities, between designing evaluations for web-based as well as in-person professional development opportunities?
Korn: Methodologically it's extremely difficult to measure the effectiveness of an online project. It probably would be useful for me to hang out more with people who do the social networking so I can begin to understand how the infrastructure emerges. It might help me problem solve and think through the evaluation. I bet there are many correlations between online communities and how all those networks naturally emerge inside an organization. You know, who talks to whom. So there may not be a lot of difference. Methodologically, I think it is challenging to have valid and reliable data. That's a stumbling block for me.
Palmquist: Based on your experiences as an evaluator of informal learning experiences, what kinds of strategies would help evaluators at different points in their careers?
Korn: For the evaluator new to the field, it is critical to develop effective communication strategies. We have a communication challenge in our field; we often generate a lot of information, which is compounded by the fact that our museum colleagues already have full plates and may not have the time to wade through a report, even if it is nicely synthesized. In addition to helping people use evaluation as a learning tool, it is also important to realize that museum practitioners already have too much to do, and as a field, we need to find ways to deliver only the most important information and feel okay about letting go of the other information.
For example, if a client says he or she is too busy and can't meet with the evaluator, then the evaluator could take initiative and determine which information is most important to know, and suggest how the museum might use it and how it might alter its practice accordingly. A better scenario, though, would be for the evaluator and museum to apply what I call the Cycle of Learning. In this scenario, everyone discusses and reflects on the evaluation findings together in the context of their practice, and everyone learns along the way. The evaluator would use inquiry, as implied by the questions in the conceptual diagram of the learning cycle, in order to facilitate the process in such a way that the client would begin to realize the most important information they need to know, where they're learning through the conversational analysis, and what they can do to improve or change their practice. They're saying, "Aha, I no longer need to do these programs because they have little impact, but I need to do these programs in another way so we can achieve greater impact."
Helping museums do less to achieve more is part of my whole philosophy about evaluation in a nutshell. I want museums to make a difference in people's lives. As an evaluator, it is my responsibility to help museums figure out how to make a difference in people's lives. I believe evaluation is a tool that can help me do that. New professionals need to develop communication skills and to learn what is important for the client to know. Some of this comes with experience. Evaluators have to become adept at helping museums cull through all the stuff to figure out the one, two, or three ideas or experiences that are most important in terms of achieving impact. Evaluators need to be able to write clearly and ask a lot of questions so they can help their museum colleagues achieve greater clarity about what they want to achieve. Evaluation reports need to be focused, too, and communicate a few ideas really well.
Another response to your question about how to help young people entering the field is to realize they are never done learning. There are epiphanies that we all reach in our cycles of learning and our cycles of our careers and even in our personal lives. It's important to realize that learning never ends. I think we all know that, but then we need to also change our own practice to keep up with our learning. Life goes on. There's a lot to keep up with.
Palmquist: Does that carry through across levels of practice?
Korn: I think so. The cycle of learning I mentioned: I'm on that wheel. Five years ago I would not have said any of what I have said here. I'm on that wheel.
Palmquist: Does the web allow for this sort of transformative practice?
Korn: It has to. Otherwise, we should all go home.
Palmquist: I'm really fascinated to figure out what sort of online community could be transformative.
Korn: There's probably a whole field of study about online communities; I know very little about it myself. However, if young professionals will be using online communities to learn about and grow their practice, then it's imperative that the resources inspire them to continually transform their practice and to realize they are never done learning.
Since our conversation with Randi in May 2008, Randi has become a member of ExhibitFiles, an online community for exhibit designers and developers. And in June, she participated in an ASTC Connect conversation: "This was my first time participating in an online conversation about evaluation. I was pretty amazed at how in-depth the conversations became. While I don't know the objectives of ASTC Connect, I can imagine, as an evaluator, analyzing the conversations that took place to explore how online conversations grow and change over their course. Clearly, there is so much to explore and learn regarding online communities and how they contribute to one's professional development."
Edited by Catherine Eberbach.