This paper examines the attrition and mobility of early-career teachers of varying quality using value-added measures of teacher performance.
Using administrative longitudinal data from five states, we study how value-added measures of teacher performance are affected by changes in state standards and assessments.
As a part of this work, Carnegie enlisted a panel of distinguished scholars to address key questions and challenges that the developers of teacher evaluation systems face in designing and implementing systems that use value-added measures of teacher performance.
Recent work has included several studies related to value-added measures of teacher performance, teacher effectiveness in the early grades, school choice, teacher mobility, and special needs identification.
Opting out adds noise to the data, which increases the amount of variability in the teacher performance measures because each teacher's score is based on fewer students.
Granted, there are mechanisms in place to evaluate teacher performance, but many of these value-added measures feel more like punch lists than professional reviews.
The authors address three criticisms of value-added (VA) measures of teacher effectiveness that Stanford University education professor Linda Darling-Hammond and her colleagues present in a recent article: that VA estimates are inconsistent because they fluctuate over time; that teachers' value-added performance is skewed by student assignment, which is non-random; and that value-added ratings can't disentangle the many influences on student progress.
For the subset of teachers who can be linked to students, we are able to estimate value-added measures of classroom performance for each teacher in each year.
The advent of more student testing, especially the spread of value-added measures of pupil and school performance, has given us both the technical ability to evaluate teachers by the results they produce and the moral imperative to do so.
Those who want to reward teachers on the basis of measured performance should consider whether it is worth the trouble and expense to implement value-added assessment if the only outcome is to reward small numbers of teachers.
This component makes up between 50 and 75 percent of the overall evaluation scores in the districts we studied, and much less is known about observation-based measures of teacher performance than about value-added measures based on test scores.
When they insist that ideas like school choice, performance pay, and teacher evaluations based on value-added measures will themselves boost student achievement, would-be reformers stifle creativity, encourage their allies to lock elbows and march forward rather than engage in useful debate and reflection, turn every reform proposal into an us-against-them steel-cage match, and push researchers into the awkward position of studying whether reforms "work" rather than when, why, and how they make it easier to improve schooling.
They use a multitude of measures (performance-based assessment, growth models, or value-added models) to assess teacher practice.
In February 2012, the New York Times took the unusual step of publishing performance ratings for nearly 18,000 New York City teachers based on their students' test-score gains, commonly called value-added (VA) measures.
· Base teacher evaluations on multiple measures of performance, including "value-added" data on student academic progress.
And beyond the school and district accountability provisions spawned by No Child Left Behind and its kin, many states have upped the ante to incorporate teachers' contributions to their students' test performance into teacher evaluation systems, and these value-added measures require testing large numbers of students.
Because value-added measures were so reliable at predicting teachers' performance, the researchers urged school districts to use them as a "benchmark" for studying the effect of other measures.
As examples, studies that use student test performance to measure teachers' effectiveness, adjusted for prior achievement and background characteristics, demonstrate that, on average, teachers add more to their students' learning during their second year of teaching than they do in their first year, and more in their third year than in their second.
Other differences come from the tests on which the value-added measures are based; because test scores are not perfectly accurate measures of student knowledge, it follows that they are not perfectly accurate gauges of teacher performance.
In Florida, the District of Columbia, and a growing number of states and districts, performance measures, including value-added estimates, play a key role in placing teachers in performance categories.
Because value-added measures adjust for the characteristics of students in a given classroom, they are less biased measures of teacher performance than are unadjusted test score measures, and they may be less biased even than some observational measures.
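The adjustment these excerpts describe can be sketched in a few lines: regress current test scores on prior achievement, then average each teacher's residuals. This is a minimal illustration of the idea only; the function name and the toy data below are assumptions, not taken from any of the studies quoted here, and real value-added models add student covariates and shrinkage.

```python
import numpy as np

def value_added(prior, current, teacher_ids):
    """Toy value-added estimate: average residual per teacher after
    adjusting current scores for prior achievement via OLS."""
    X = np.column_stack([np.ones_like(prior), prior])   # intercept + prior score
    beta, *_ = np.linalg.lstsq(X, current, rcond=None)  # OLS fit
    residuals = current - X @ beta                      # score net of prior achievement
    return {t: residuals[teacher_ids == t].mean()       # teacher effect = mean residual
            for t in np.unique(teacher_ids)}

# Illustrative data: teacher "B"'s students gain more than prior scores predict.
prior    = np.array([50., 60., 70., 50., 60., 70.])
current  = np.array([52., 62., 72., 58., 68., 78.])
teachers = np.array(["A", "A", "A", "B", "B", "B"])
effects = value_added(prior, current, teachers)  # B's mean residual exceeds A's
```

The point of the sketch is only that the estimate is relative: each teacher is scored against what the regression predicts for that teacher's own students, not against a raw average.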
The authors add that, to ensure that evaluation ratings better reflect teacher performance, states should more specifically track the results of each evaluation measure to pinpoint where misalignment exists between components, such as between student learning and observation measures.
The most common way to use multiple measures in teacher accountability is through weighted averages of value-added with other gauges of teacher performance.
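A weighted average of this kind is simple arithmetic. The component names and weights below are illustrative assumptions (districts weight these very differently), not figures drawn from the quoted studies:

```python
def composite_score(components, weights):
    """Weighted average of evaluation components, each scored on the
    same scale; the weights must cover exactly the supplied components."""
    if set(components) != set(weights):
        raise ValueError("components and weights must match")
    total = sum(weights.values())  # normalize in case weights don't sum to 1
    return sum(components[k] * weights[k] for k in components) / total

# Hypothetical 0-100 component scores with a 50/35/15 weighting.
score = composite_score(
    {"value_added": 62.0, "observation": 78.0, "student_survey": 70.0},
    {"value_added": 0.50, "observation": 0.35, "student_survey": 0.15},
)
```

With these made-up numbers the composite is 68.8, which shows the practical effect of weighting: the heavily weighted value-added component pulls the result below the simple average of the three scores.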
Accordingly, and also per the research, this is not getting much better in that, as per the authors of this article as well as many other scholars: (1) the variance in value-added scores that can be attributed to teacher performance rarely exceeds 10 percent; (2) "gross" measurement errors come, first, from the tests being used to calculate value-added; (3) teacher effectiveness scores also have restricted ranges, given these tests' limited stretch, depth, and instructional insensitivity; this was also at the heart of a recent post demonstrating that "the entire range from the 15th percentile of effectiveness to the 85th percentile of [teacher] effectiveness [using the EVAAS] cover[ed] approximately 3.5 raw score points [given the tests used to measure value-added]"; (4) context, or student, family, school, and community background effects, simply cannot be controlled for or factored out, (5) especially at the classroom/teacher level when students are not randomly assigned to classrooms (and teachers are not randomly assigned to teach those classrooms)... although random assignment will likely never happen for the sake of improving the sophistication and rigor of the value-added model over students' "best interests."
The preliminary report focuses on two measures of teacher performance: value-added analysis and student surveys.
The largest study of performance incentives based on value-added measures comes from a Nashville, Tennessee study that randomly assigned middle-school math teachers (who volunteered for the study) to be eligible for performance-based pay or not.
By Dan Goldhaber: How might value-added measures be useful to assess the performance of teacher prep programs?
Value-added measures are imperfect, but they are one among many imperfect measures of teacher performance that can inform decisions by teachers, schools, districts, and states.
[13] In other words, value-added, along with other measures, [14] can help screen the performance of not only teachers, but observers as well.
Whatever the future uses of value-added measures, the idea of holding teachers accountable for student performance seems here to stay.
Initial findings from the Bill & Melinda Gates Foundation's Measures of Effective Teaching (MET) study indicate that teachers' value-added histories, composite measures based on student test scores and teachers' perceived ability to present challenging material, are strong indicators of future classroom performance.
The Commission of Higher Education is working to: 1) improve the quality of teacher preparation and performance; 2) open the level of dialogue among superintendents and principals and higher education teacher preparation programs; 3) expand communication among vertical teams in P-16 to support students entering post-secondary education; and 4) review and measure learning outcomes at all levels, including higher education, and demonstrate significant value-added for post-secondary options.
Using multiple measures of teacher performance is one safeguard against the shortcomings of value-added analysis.
The perfect evaluation system doesn't exist yet, but we do have access to measures of teacher performance that are far better than seniority: teacher ratings, classroom management, teacher attendance, specific licensure, peer or principal review, and value-added student data.
What we should do instead is expand upon the accountability measures set in place a decade ago under No Child Left Behind, and provide families with the data they need (including, contrary to the assertions of our friend Andy Rotherham, value-added data on teacher performance) so they can make smart choices and spur systemic reform.
Can value-added measures of teacher education performance be trusted?
[4] As the use of value-added models now allows for the development of a more meaningful understanding of teacher effectiveness, districts should ensure that performance pay systems consider both qualitative and quantitative measures in order to fairly assess and compensate teachers for their performance.
CAP's report notes that the discussion of publishing teachers' names along with their value-added score (a measure of a teacher's efficacy, relative to other teachers in the group, in promoting student achievement) began when the Los Angeles Times published a report featuring the performance ratings for Los Angeles Unified School District teachers.
I have reviewed the next of nine articles (#3 of 9) here, titled "Exploring the Potential of Value-Added Performance Measures to Affect the Quality of the Teacher Workforce," authored by Dan Goldhaber: Professor at the University of Washington Bothell, Director of the National Center for Analysis of Longitudinal Data in Education Research (CALDER), and a Vice President at the American Institutes for Research (AIR).
Exploring the potential of value-added performance measures to affect the quality of the teacher workforce.
"On the other hand, the use of value-added performance measures might lead to positive changes in the perception of teachers, making teaching a more prestigious profession and hence leading more people to pursue a teaching career" (p. 89).
But all of them share the idea that teachers who are particularly successful will help their students make large learning gains, that these gains can be measured by students' performance on achievement tests, and that the value-added score isolates the teacher's contribution to these gains.
In this paper we examine the mobility of early-career teachers of varying quality, measured using value-added estimates of teacher performance.
This research brief considers the stability of value-added measures of teacher effectiveness over time and the resulting implications for the design and implementation of performance-based teacher compensation schemes.
The reports published in The Times used a so-called value-added analysis to measure the performance of elementary school teachers.
If you are to give validity to the value-added approach to measuring a teacher's performance, the prerequisite is that the standardized test is a valid measure of a student's learning and knowledge, and that in itself is controversial.
Most studies that have fueled alarm over the attrition and mobility rates of high-quality teachers have relied on proxy indicators of teacher quality, which recent research finds to be only weakly correlated with value-added measures of teachers' performance.