These publications disseminate academic research in peer-reviewed or editorially refereed journals.
American Journal of Evaluation
Each dynamic issue of the American Journal of Evaluation (AJE) explores the complex and difficult challenges related to conducting evaluations. From choosing program theories to implementing an evaluation to presenting the final report to managing an evaluation's consequences, AJE offers original, peer-reviewed, often highly cited articles about the methods, theory, and practice of evaluation.
Evaluation and Program Planning
Evaluation and Program Planning is based on the principle that the techniques and methods of evaluation and planning transcend the boundaries of specific fields and that relevant contributions to these areas come from people representing many different positions, intellectual traditions, and interests. In order to further the development of evaluation and planning, the journal publishes articles from the private and public sectors in a wide range of areas: organizational development and behavior, training, planning, human resource development, health and mental health, social services, mental retardation, corrections, substance abuse, and education. The primary goals of the journal are to assist evaluators and planners in improving the practice of their professions, developing their skills, and expanding their knowledge base.
Evaluation & Research in Education
Evaluation & Research in Education aims to make methods and contents of evaluation and research in education available to teachers, administrators, and research workers. Papers published in the quarterly journal: report evaluation and research findings; treat conceptual and methodological issues; and/or consider the implications of the above for action. There is an extensive book reviews section and also occasional reports on educational materials and equipment.
Evaluation Review
For nearly three decades, Evaluation Review has served as an interdisciplinary forum for researchers, planners, and policymakers who develop, implement, and utilize studies designed to improve the human condition.
Evaluation: The International Journal of Theory, Research and Practice
Over the last two decades, evaluation has become a major issue for academics, governmental and public organizations, and businesses throughout the world. This has, however, resulted in a body of knowledge scattered across disciplines, professions, and countries. To promote dialogue internationally and to build bridges within this expanding field, Evaluation: The International Journal of Theory, Research and Practice was launched in July 1995.
Journal of MultiDisciplinary Evaluation
The mission of this journal is to share the news and thinking of the profession and discipline of evaluation, in the world and for the world. It is a peer-reviewed journal published in association with the Interdisciplinary Doctoral Program in Evaluation at The Evaluation Center, Western Michigan University.
New Directions for Evaluation
New Directions for Evaluation, a quarterly thematic journal, is an official publication of the American Evaluation Association. The journal publishes empirical, methodological, and theoretical works on all aspects of evaluation. Although the journal addresses a wide range of substantive areas such as government performance, tax policy, energy, environment, mental health, education, job training, and public health, the focus is on evaluation theory, practice, and research within these areas. Topics such as product evaluation, personnel evaluation, policy analysis, and technology assessment are also often included.
These non-peer-reviewed publications feature articles of interest and may include advertising.
The Evaluation Exchange
Harvard Family Research Project's evaluation periodical, The Evaluation Exchange, addresses current issues facing program evaluators of all levels, with articles written by prominent evaluators in the field. Designed as an ongoing discussion among evaluators, program practitioners, funders, and policymakers, The Evaluation Exchange highlights innovative methods and approaches to evaluation, emerging trends in evaluation practice, and practical applications of evaluation theory.
Online Discussion Groups and Listservs
These are topical mailing lists, accessible either through email or website archives.
American Evaluation Association Discussion Lists/Listservs
Over 2000 evaluators subscribe to AEA's own listserv, EVALTALK, and other email-based discussion lists focusing on evaluation and/or evaluation-related methodologies.
Evaluation Cafe of The Evaluation Center, Western Michigan University
The Evaluation Center's mission is to advance the theory, practice, and utilization of evaluation. The Evaluation Cafe presents discussions, debates, and presentations about evaluation.
Blogs and Wikis
Blogs are websites that contain posts about topical commentary and news while wikis are websites in which the community can contribute material to a body of topical knowledge.
Evaluation Portal
This Evaluation Portal offers hand-picked, human-edited, categorized information about the topic of evaluation (and a bit about social science methods) that Lars Balzer has found useful during his work in this field.
EvaluationWiki
EvaluationWiki was founded in September 2006 by the non-profit organization Evaluation Resource Institute (ERI). The mission of EvaluationWiki is to make freely available a compendium of up-to-date information and resources to everyone involved in or interested in the science and practice of evaluation. This compendium will be a continually growing and evolving representation of evaluation knowledge.
Evaluation Database Resources
Find links to additional evaluation databases and resources. These sites provide abstracts, evaluation reports, and guidelines for conducting evaluations.
Assessment Tools in Informal Science
This searchable website catalogs assessment tools for informal science learning. The goal is to provide practitioners, evaluators, researchers, and policymakers with the information needed to choose appropriate tools for assessing program quality and outcomes for children and youth. Supported by the Noyce Foundation, PEAR (Program in Education, Afterschool and Resiliency), located at McLean Hospital and Harvard Medical School, reviewed existing tools and published the findings in a report. This website is based on the findings of that report and will be continuously updated in collaboration with the Youth Development Researchers at 4-H.
Association of Science and Technology Centers
The Association of Science and Technology Centers provides abstracts of front-end evaluation studies for science exhibitions. Each summary includes brief descriptions of the study's purpose, methods, and major findings.
The British Interactive Group
The British Interactive Group (BIG) is an organization for individuals involved in all aspects of hands-on exhibitions and activities. BIG presents reports summarizing what its members have learned from various museum evaluation studies and research.
Center for Inquiry in Science Teaching and Learning
The St. Louis Center for Inquiry in Science Teaching and Learning (CISTL) supports inquiry-based teaching in K-12 science education through professional development and through research into science learning and teaching. CISTL is creating an annotated bibliography where visitors can search and download entries.
Exploratorium Evaluation Studies
The Exploratorium's Visitor Research and Evaluation Department conducts research on informal learning in the public space of the museum. Examples of front-end, formative, and summative evaluation studies conducted at the Exploratorium (San Francisco, CA) are available for review and download.
Harvard Family Research Project
This database provides information about evaluation work from out-of-school time programs and initiatives. Each profile includes a program overview, detailed information about each evaluation report, and where possible, electronic links to actual evaluation reports and contacts for program directors and evaluators.
Online Evaluation Resource Library
The Online Evaluation Resource Library is a National Science Foundation funded project developed for professional evaluators and program developers. Although targeted for those who work in school environments, it provides extensive evaluation resources and samples — instruments, plans, and reports — that can be modeled, adapted, or used as is.
Oregon Museum of Science and Industry (OMSI) Evaluation & Visitor Studies Site
OMSI provides this web-based resource to share what the museum has learned through the process of evaluating exhibits and programs. It is intended principally for museum staff and science educators, but may be of use to others interested in informal education. At this site, you can learn more about each of the five stages in OMSI's evaluation process and view sample reports from evaluations conducted at OMSI.
These are membership groups, usually non-profit, that exist to promote and professionalize a field or discipline. Many hold associated conferences.
American Educational Research Association
The American Educational Research Association (AERA), founded in 1916, is concerned with improving the educational process by encouraging scholarly inquiry related to education and evaluation and by promoting the dissemination and practical application of research results. AERA is an international professional organization, with the primary goal of advancing educational research and its practical application.
American Evaluation Association
The American Evaluation Association is an international professional association of evaluators devoted to the application and exploration of program evaluation, personnel evaluation, technology, and many other forms of evaluation. Evaluation involves assessing the strengths and weaknesses of programs, policies, personnel, products, and organizations to improve their effectiveness.
Committee on Audience Research and Evaluation (CARE), American Association of Museums
CARE is a Standing Professional Committee of the American Association of Museums (AAM), and is made up of audience researchers and evaluators who work in museums, independent professionals, and others interested in audience research in museums and other cultural institutions. Their focus is on visitor studies, a term commonly used in the museum field to describe the process of systematically obtaining knowledge from and about museum visitors, actual and potential, for the purpose of increasing and utilizing such knowledge in the planning and execution of those activities that relate to the public.
Evaluation & Visitor Research Special Interest Group (EVRSIG)
The Evaluation & Visitor Research Special Interest Group (EVRSIG), established in 1996, is a special interest group of Museums Australia made up of museum professionals dedicated to advocating for the visitor voice within museum practice.
Visitor Studies Association (VSA)
VSA is today's premier professional organization focusing on all facets of the visitor experience in museums, zoos, nature centers, visitor centers, historic sites, parks and other informal learning settings.
These are organizations that conduct and disseminate research but are not open for membership.
The Evaluation Center at the University of West Georgia
The Evaluation Center at the University of West Georgia seeks to serve a wide range of clients who are focused on the development of human capital. The Evaluation Center is designed to provide objective evaluation of policies and programs associated with human development across diverse fields and conducted for clients at the local, state, national, and international levels.
The Evaluation Center at Western Michigan University
The Evaluation Center's mission is to advance the theory, practice, and utilization of evaluation. The Center's principal activities are research, development, dissemination, service, instruction, and national and international leadership in evaluation.
Harvard Family Research Project
Since 1983, the Harvard Family Research Project has helped stakeholders develop and evaluate strategies to promote the well-being of children, youth, families, and their communities. It works primarily within three areas that support children's learning and development: early childhood education, out-of-school time programming, and family and community support in education. Underpinning all of its work is a commitment to evaluation for strategic decision making, learning, and accountability. Building on the knowledge that schools cannot do it alone, the project also focuses national attention on complementary learning: the idea that a systemic approach, which integrates school and nonschool supports, can better ensure that all children have the skills they need to succeed.
SRI International / Center for Technology in Learning
Innovations in teaching and learning call for innovations in evaluation. CTL's interdisciplinary evaluation teams have a reputation for applying cutting-edge approaches to studying both early-stage and mature reform initiatives in schools and community settings, providing data that clients can use to improve their program designs and analyze implementation and impact. CTL's researchers are experienced in conducting mixed-method evaluation studies and in using diverse study designs, including quasi-experimental and experimental outcome studies.
NSF Evaluation Guides
The National Science Foundation publishes three useful handbooks about evaluation that are relevant to informal learning studies. Each is available for free as a PDF download.
Framework for Evaluating Impacts of Informal Science Education Projects (2008)
Drawing on a March 12-13, 2007 NSF-funded workshop on informal science education evaluation, "Evaluation Activities Related to the Academic Competitiveness Council's Examination of STEM Education Programs," this handbook offers background on NSF's evolving reporting requirements and advice from evaluators working in the field about how to gather evidence of project impacts.
The User-Friendly Handbook for Project Evaluation (2010)
This handbook was developed to provide managers working with the National Science Foundation (NSF) with a basic guide for the evaluation of NSF's educational programs. It covers types of evaluation, steps in doing an evaluation, quantitative and qualitative methods, and culturally responsive strategies. It is aimed at people who need to learn more about both what evaluation can do and how to do an evaluation, rather than those who already have a solid base of experience in the field. It builds on established principles, blending technical knowledge and common sense to meet the special needs of NSF and its stakeholders.
User-Friendly Handbook for Mixed Methods Evaluations (1997)
Although there are many textbooks, manuals, and guides dealing with evaluation, few are geared to the needs of the Directorate for Education and Human Resources (EHR) grantee who may be an experienced researcher but a novice evaluator. One of the ways that EHR seeks to fill this gap is by the publication of what have been called "user-friendly" handbooks for project evaluation. This handbook is intended to provide the knowledge needed for planning and managing useful evaluations. Its specific intent is to provide information on qualitative techniques and discuss how they can be combined effectively with quantitative measures.
These are manuals for do-it-yourself evaluations.
The Corporation for Public Broadcasting
This website is a guide to what evaluators need to know, what others have done, and how to maximize the educational impact of a project.
Florida Atlantic University Nonprofit Resource Center
Program evaluation is an essential task for non-profit organizations. It is a tool an organization can use to ensure that the programs it currently runs serve its mission and achieve the best possible results. Program evaluation can alert an organization to trouble areas before they become unmanageable, and it can assist in determining how best to allocate resources.
IBEC Outcomes Toolkit 2.0
The Outcomes Toolkit 2.0 presents a four-step process for conducting outcome-based evaluation. Over the past two years, the developers worked with several libraries around the country to develop and test the process and the evaluation instruments found in the toolkit.
Institute of Museum and Library Services
This tutorial is designed for museums, libraries, and related organizations that are applying for National Leadership Grants (NLG). The purpose is to provide skills, knowledge, and tools to develop a good project plan.
Library Services and Technology Act Toolkit
The toolkit is designed to provide: an Outcomes Plan "wizard" and data report forms for mid-year and annual reports; point-of-need instruction in outcome-based evaluation; instruction on data collection; tools for data analysis; guidance in reporting project progress; and strategies for reporting project successes and results.
Museums, Libraries and Archives Council
Inspiring Learning for All describes what an accessible and inclusive museum, archive, or library that stimulates and supports learning looks like.
Research Councils UK
This new RCUK evaluation guide is aimed at anyone who wants to talk with the public about issues around science and research.
Shaping Outcomes
Outcomes-based planning and evaluation (OBPE) has emerged as a best practice in museum and library services. Since 1998, the Institute of Museum and Library Services (IMLS) has offered a two-day, face-to-face OBPE workshop. A cooperative project between IMLS and Indiana University-Purdue University Indianapolis (IUPUI), Shaping Outcomes provides a convenient alternative to this workshop: an online, instructor-mediated course on OBPE for library and museum personnel as well as students in the museum and library fields.
W. K. Kellogg Foundation
This handbook provides a framework for thinking about evaluation as a relevant and useful program tool. It was written primarily for project directors who have direct responsibility for the ongoing evaluation of W.K. Kellogg Foundation–funded projects.
Work Group for Community Health and Development at the University of Kansas
The Community Tool Box is a large resource for free information on essential skills for building healthy communities. It offers over 7,000 pages of practical guidance in creating change and improvement.
These are graduate degree programs in evaluation.
Don't see a resource you would like included? Contact us.