Carnegie Knowledge Network

What We Know: Value-Added Methods and Applications
Carnegie Knowledge Network Concluding Recommendations

By Dan Goldhaber, Douglas N. Harris, Susanna Loeb, Daniel F. McCaffrey, and Stephen W. Raudenbush
A synthesis of the key takeaways from the Carnegie Knowledge Network and recommendations from the Carnegie Panelists.

What Do We Know About the Long-term Impacts of Teacher Value-Added?

By Stephen W. Raudenbush
In what ways does a student benefit from having a teacher with high value-added?

Is Value-Added Accurate for Teachers of Students with Disabilities?

By Daniel F. McCaffrey
What makes estimating value-added for teachers of students with disabilities challenging?

How Can Value-Added Measures Be Used for Teacher Improvement?

By Susanna Loeb
What are the mechanisms through which value-added can lead to improved student outcomes?

What Do Value-Added Measures of Teacher Preparation Programs Tell Us?

By Dan Goldhaber
How might value-added measures be useful to assess the performance of teacher prep programs?

What Do We Know About the Tradeoffs Associated with Teacher Misclassification in High-Stakes Personnel Decisions?


By Dan Goldhaber and Susanna Loeb

Evaluators have to rely on inherently imperfect measures to rate teachers. As a result, grouping teachers into performance categories will inevitably produce errors. These errors take the form of “false positive” and “false negative” classifications, both of which have important implications for students and teachers.

How Do Value-Added Indicators Compare to Other Measures of Teacher Effectiveness?

By Douglas N. Harris

In the recent drive to revamp teacher evaluation and accountability, teacher value-added measures have unquestionably played the starring role. But the star of the show is not always the best actor, nor can the star succeed without a strong supporting cast. In assessing teacher performance, observations of classroom practice, portfolios of teachers’ work, student learning objectives, and surveys of students are all possible additions. In this paper, I will explain how these various measures stack up on two essential criteria: validity and reliability.

Do Different Value-Added Models Tell Us the Same Things?

By Dan Goldhaber and Roddy Theobald

Given the modeling and vendor options at their disposal, school districts and states likely have a number of pressing questions about which model is “right” for their specific situation. This entry explores how much difference the choice of model makes. How would the same teacher rank under different modeling approaches? And, in particular, into which effectiveness category would the same teacher fall under different models?

How Stable are Value-Added Estimates across Years, Subjects, and Student Groups?

By Susanna Loeb and Christopher A. Candelaria

Value-added measures are being used to assess teacher effectiveness, but how can we make sense of the inconsistency in value-added measures for the same teacher across time, subjects, and student populations? Some of the inconsistency is driven by true differences in performance: a teacher may simply perform better in one year than in another. The rest comes from inaccuracy in the value-added measure itself. Understanding both sources of variation can help education leaders make the best use of the information that value-added measures provide when making decisions about teachers.
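The two sources of inconsistency described here can be illustrated with a small simulation (a hypothetical sketch, not any author's actual model): each simulated teacher gets a persistent "true" effect, and each year's value-added estimate adds independent measurement error, so the year-to-year correlation of the estimates falls below one even though true effectiveness never changes.

```python
import random

random.seed(1)

N = 20000          # simulated teachers
TRUE_SD = 1.0      # spread of persistent (true) effectiveness
NOISE_SD = 1.0     # measurement error in each year's estimate

def corr(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

# Each teacher's true effect is fixed; each year's estimate is noisy.
true_effect = [random.gauss(0, TRUE_SD) for _ in range(N)]
year1 = [t + random.gauss(0, NOISE_SD) for t in true_effect]
year2 = [t + random.gauss(0, NOISE_SD) for t in true_effect]

# With equal true and noise variance, the expected year-to-year
# correlation is TRUE_SD**2 / (TRUE_SD**2 + NOISE_SD**2) = 0.5.
print(round(corr(year1, year2), 2))
```

Raising NOISE_SD relative to TRUE_SD lowers the correlation, mimicking noisier tests or smaller classes; lowering it mimics more reliable measures.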

Do Value-Added Methods Level the Playing Field for Teachers?

By Daniel F. McCaffrey

Value-added measures have caught the interest of policymakers because, unlike many current uses of test scores in accountability systems, they purport to “level the playing field,” so that a measure of a teacher’s effectiveness does not depend on the characteristics of his or her students. Yet many stakeholders are concerned that value-added methodology does not live up to its billing, and that teacher effects estimated from value-added measures are sensitive to which students a teacher teaches. For instance, do teachers of low-income students, minority students, English language learners, or special education students consistently receive lower value-added scores than equally effective teachers of other students? This entry discusses what is and is not known about how well value-added levels the playing field by controlling for student characteristics.

How Should Educators Interpret Value-Added Scores?


By Stephen W. Raudenbush and Marshall Jean

A teacher’s value-added score is intended to convey how much that teacher has contributed to student learning in a particular subject in a particular year. The potential uses of value-added measures can be controversial: some doubt the validity of the tests themselves, some question the idea that student learning gains reflect teacher effectiveness, and some question the emphasis on test scores in shaping teachers’ goals. The purpose of this entry is not to settle these controversies but to answer a more limited question: how might educators reasonably interpret value-added scores? Two key principles emerge: understanding the sources of uncertainty in value-added scores, and quantifying its extent, are essential to interpreting those scores sensibly.
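As an illustration of quantifying uncertainty (a hypothetical sketch with made-up numbers, not the authors' procedure), one can attach a standard error and a rough 95% interval to a teacher's average student gain; with only a handful of students, the interval is wide relative to the estimate:

```python
import math

# Hypothetical test-score gains for one teacher's students
# (illustrative numbers only, not real data).
gains = [3.0, -1.0, 4.0, 2.0, 0.0, 5.0, 1.0, 2.0, -2.0, 3.0]

n = len(gains)
mean = sum(gains) / n                               # estimated effect
var = sum((g - mean) ** 2 for g in gains) / (n - 1) # sample variance
se = math.sqrt(var / n)                             # standard error

# A rough 95% interval around the estimate: with only ten students,
# the interval is wide relative to the estimate itself.
low, high = mean - 1.96 * se, mean + 1.96 * se
print(f"estimate {mean:.2f}, 95% interval ({low:.2f}, {high:.2f})")
```

Here the estimate is 1.7 points but the interval runs from about 0.3 to 3.1; reporting the interval alongside the score makes the extent of the uncertainty explicit.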

A Focus on Value-Added Measures


By Anthony Bryk

Many teachers are starting to embrace the idea of accountability but oppose being judged by value-added measures. Computing value-added well is indeed a challenging task.

The CKN Difference

The Carnegie Knowledge Network seeks to provide education policymakers and practitioners with timely, authoritative research and information on the use of value-added methodologies and other metrics in teacher evaluation systems.

Funded through a cooperative agreement with the Institute of Education Sciences. The opinions expressed are those of the authors and do not represent the views of the Institute or the U.S. Department of Education.