All schools also must have good value-added measures for their disadvantaged pupils.
Plenty of "good" schools by other measures, however, are only fair by a value-added measure.
In some quarters, there is doubt about value added, so the fact that we see inequity based on other measures as well should convince readers that our value-added findings are accurate.
And then for our research, we have to both have a good measure of value added and ensure that when we're using that measure, we are doing a good job of also accounting for other things that might be going on during a child's schooling that might also affect 8th-grade tests and high school outcomes.
In our recent article for Education Next, "Choosing the Right Growth Measure," we laid out an argument for why we believe a proportional growth measure that levels the playing field between advantaged and disadvantaged schools (represented in the article by a two-step value-added model) is the best choice for use in state and district accountability systems.
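A two-step model of this sort can be sketched on synthetic data: the first step purges student-level predictors (prior score, demographics) from current scores, and the second step purges school-level aggregates of those same predictors, so each school is compared against schools serving similar students. The sketch below is a minimal illustration of the general idea, not the model from the article; all variable names and data are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 1000 students across 20 schools (all values illustrative).
n, n_schools = 1000, 20
school = rng.integers(0, n_schools, n)
prior = rng.normal(size=n)                      # prior-year test score
frl = rng.binomial(1, 0.4, n)                   # free/reduced-lunch indicator
school_effect = rng.normal(scale=0.3, size=n_schools)
score = 0.7 * prior - 0.2 * frl + school_effect[school] + rng.normal(scale=0.5, size=n)

def ols_residuals(y, X):
    """Residuals from an OLS fit of y on X (with an intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return y - X1 @ beta

# Step 1: purge student-level factors (prior score, demographics).
r1 = ols_residuals(score, np.column_stack([prior, frl]))

# Step 2: purge school-level aggregates of the same factors, so schools
# are judged against schools serving similar student populations.
sch_prior = np.array([prior[school == s].mean() for s in range(n_schools)])
sch_frl = np.array([frl[school == s].mean() for s in range(n_schools)])
r2 = ols_residuals(r1, np.column_stack([sch_prior[school], sch_frl[school]]))

# A school's value-added = its mean second-step residual.
va = np.array([r2[school == s].mean() for s in range(n_schools)])
print(np.corrcoef(va, school_effect)[0, 1])
```

On this simulated roster, the recovered school value-added tracks the true simulated school effects closely, which is all the toy example is meant to show.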
Measures of teachers' value added in previous years are an even better predictor of future gains in students' achievement than are principal ratings.
If you follow the increasing use of Value-Added Measures (VAMs) and Student Growth Percentiles (SGPs) in state-, district-, school-, and teacher-accountability systems, read this very good new Mathematica working paper.
States and districts should have been focusing on the real end goal — differentiating the best teachers from those who are merely satisfactory and those who continue to struggle — a task that would not have required complicated mathematical formulas designed to measure each teacher's "value-added" to student achievement.
In addition, our analysis does not compare value added with other measures of teacher quality, like evaluations based on classroom observation, which might be even better predictors of teachers' long-term impacts than VA scores.
The aim of Progress 8 is to replace the percentage of pupils gaining five good GCSEs as the headline measure of school accountability, instead judging schools on "value added" through the duration of a pupil's time there.
She is, in my view, rightly critical of the notion that good teaching can be easily reduced to test scores or value-added measures, or that you can fire your way to improvement.
Research on how best to measure "value add" in all schools must be prioritised.
These and other findings with respect to the correlates of teacher effectiveness are obtained from estimations using value-added models that control for student characteristics as well as school and (where appropriate) teacher fixed effects in order to measure teacher effectiveness in reading and math for Florida students in fourth through eighth grades for eight school years, 2001-2002 through 2008-2009.
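A specification of this general kind can be sketched in miniature: regress current scores on prior scores plus teacher dummies (fixed effects), then read the estimated teacher effects off the dummy coefficients. Everything below is synthetic and illustrative, not the Florida data or the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic roster: 600 students, 30 teachers, 6 schools (invented values).
n, n_teachers, n_schools = 600, 30, 6
teacher = rng.integers(0, n_teachers, n)
school = teacher % n_schools          # each teacher sits in exactly one school
prior = rng.normal(size=n)
teach_eff = rng.normal(scale=0.25, size=n_teachers)
score = 0.8 * prior + teach_eff[teacher] + rng.normal(scale=0.4, size=n)

# Design matrix: intercept, prior score, and teacher dummies (teacher 0 as the
# reference category). School fixed effects are absorbed by the teacher
# dummies here, because each teacher belongs to a single school.
D = np.zeros((n, n_teachers - 1))
for j in range(1, n_teachers):
    D[teacher == j, j - 1] = 1.0
X = np.column_stack([np.ones(n), prior, D])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

# Estimated teacher effects, expressed relative to the reference teacher.
est = np.concatenate([[0.0], beta[2:]])
print(np.corrcoef(est, teach_eff)[0, 1])
```

The estimates are only identified relative to the reference teacher, which shifts every estimate by a constant but leaves the ranking (and the correlation with the true simulated effects) unchanged.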
The correlation between teacher effectiveness (as demonstrated by value-added student growth measures) and student life outcomes (higher salaries, advanced degrees, neighborhoods of residence, and retirement savings) is staggering; it's not an exaggeration to say that great teachers substantially improve students' future quality of life and those students' contributions to the common good.
Testing, especially with value-added measures attached, functionally requires teachers to waste precious time on low-yield activities (practicing inferring, finding the main idea, etc.) that would be better spent building knowledge across subjects.
In a briefing paper prepared for the National Academy of Education (NAE) and the American Educational Research Association, Linda Darling-Hammond and three other distinguished authors reached the following conclusion: "With respect to value-added measures of student achievement tied to individual teachers, current research suggests that high-stakes, individual-level decisions, as well as comparisons across highly dissimilar schools or student populations, should be avoided."
Nonetheless, we all want our kids to have at least a few excellent teachers along the way, so it's tempting to buy into hype about value-added measures (VAM) as a way to separate the excellent from the horrifying, or at least the better from the worse.
Assuming them away leaves little room for the possibility that performance measures that are not correlated with value-added might be transmitting crucial information about teaching quality, or that there is a disconnect between good teaching and testing gains.
(Side note: One has to wonder whether it is reasonable to expect that any alternative measure would ever predict value-added better than value-added itself, and what it really proves if none does.)
They conclude that evaluation methods based on value-added measures are unhelpful to teachers at best and demoralizing at worst.
As described in an earlier brief, some research provides evidence that value-added measures — at least those that compare teachers within the same school and adjust well for students' prior achievement — do not favor teachers who teach certain types of students.
After all, we can't decide how best to use value-added measures without determining how the other measures compare.
The VAL-ED has not yet been validated to show that the teacher survey rating is related to student achievement growth, but I'd bet it provides better information about principal performance than either a rating by a supervisor or any currently existing value-added measure.
Flawed as they are, value-added measures appear to be better predictors of student achievement than the teacher characteristics that we currently use for high-stakes employment and compensation decisions.
A third potential way to use value-added measures to improve schools is to provide teachers with incentives for better performance.
For example, while we have ample evidence of unintended consequences of test-based accountability — as well as evidence of some potential benefits — we know less about the consequences of using value-added measures to encourage educators to improve.
Changes in school organization and instruction should be made with caution and attention to effective instructional practice — not so that we can have better value-added measures.
For state, district, and school leaders, value-added measures may aid in school improvement in at least three ways: improving programs, making decisions about human resources, and developing incentives for better performance.
When school leaders already have a good understanding of teachers' skills, value-added measures may not add much, but they may be helpful for leaders who are new to the job or less skilled at evaluating teachers through other means.
When schools understand the causes of inconsistency in value-added measures, they not only learn about the appropriate use of these measures, they also learn how they can best help their teachers improve.
The researchers find substantial overlap in measures of a teacher's value added across all areas, but they see substantial inconsistencies, as well.
We have summarized the research on how well value-added measures hold up across years, subject areas, and student populations, but the evidence is based on a relatively small number of studies.
Accordingly, and also per the research, this is not getting much better. As the authors of this article, along with many other scholars, have noted: (1) the variance in value-added scores that can be attributed to teacher performance rarely exceeds 10 percent; (2) gross measurement errors arise, first, from the tests being used to calculate value-added; (3) teacher effectiveness scores have restricted ranges, given these tests' limited stretch, depth, and instructional insensitivity (a recent post demonstrated that "the entire range from the 15th percentile of effectiveness to the 85th percentile of [teacher] effectiveness [using the EVAAS] cover[ed] approximately 3.5 raw score points" given the tests used to measure value-added); (4) context, that is, student, family, school, and community background effects, simply cannot be controlled for or factored out; and (5) this holds especially at the classroom/teacher level when students are not randomly assigned to classrooms (and teachers are not randomly assigned to teach those classrooms), although such random assignment will likely never happen for the sake of improving the sophistication and rigor of the value-added model over students' "best interests."
If a teacher who is very good at teaching one topic is also very good at teaching another — if her value-added measures are similar across topics — we might be comfortable using value-added measures based on a subset of student outcomes, for example, just math or just reading.
While value-added measures may be used as instruments of improvement, it is worth asking whether other measures might be better tools.
[4] The adequacy of the adjustments across schools in value-added measures, which would allow the comparison of teachers in different schools, is less clear, because schools can contribute to student learning in ways apart from the contribution of individual teachers and because students sort into schools in ways that value-added measures may not adjust for well.
The well-known Measures of Effective Teaching (MET) project funded by the Gates Foundation reports results from experiments that also address the validity of value-added at the middle and high school levels.
Moreover, in some places, the tests used to calculate value-added are better measures of desired outcomes than they are in others.
In addition, value-added measures are subject to error because the students have good or bad days when taking the test.
But value-added measures are being used more and more as data systems are better able to make that student-teacher link.
Multiple years of value-added scores provide better information on teacher effectiveness than does just one year, but even multiple-year measures are not precise.
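The precision point can be illustrated with a toy simulation: give each hypothetical teacher a fixed true effect plus independent year-to-year noise, and compare how well a single year versus a three-year average tracks the true effect. All numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# 500 hypothetical teachers; true effect sd 0.2, yearly noise sd 0.3.
n_teachers, n_years = 500, 3
true_eff = rng.normal(scale=0.2, size=n_teachers)
yearly = true_eff[:, None] + rng.normal(scale=0.3, size=(n_teachers, n_years))

one_year = yearly[:, 0]             # a single year's noisy score
multi_year = yearly.mean(axis=1)    # three-year average

r1 = np.corrcoef(one_year, true_eff)[0, 1]
r3 = np.corrcoef(multi_year, true_eff)[0, 1]
print(f"one year: {r1:.2f}, three-year average: {r3:.2f}")
```

Averaging cuts the noise variance by the number of years, which is why multi-year scores are more reliable than single-year scores, though still far from exact.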
The second means through which value-added measures may be used for improvement is by providing information for making better human resource decisions about individual teachers.
We still have only a limited understanding of how best to use value-added measures in combination with other measures as tools for improvement.
While research can inform the use of value-added measures, most decisions about how to use these measures require personal judgment, as well as a greater understanding of school and district factors than research can provide.
While a fair amount of evidence suggests that value-added measures adequately adjust for differences in the background characteristics of students in each teacher's classroom — much better than do most other measures — value-added measures are imprecise.
Districts, states, and schools can, at least in theory, generate gains in educational outcomes for students using value-added measures in three ways: creating information on effective programs, making better decisions about human resources, and establishing incentives for higher performance from teachers.
[13] In other words, value-added, along with other measures,[14] can help screen the performance of not only teachers, but observers as well.
He further testified that test scores are, indeed, one way to measure student achievement and that value-added models should be able to identify good and bad teachers.
Value-added approaches hold great promise, but there is a need to develop better tests (and other thoughtful measures of student learning) and better measures of teacher practice to use along with test scores, so they are not the sole factor used to evaluate teacher effectiveness.
The perfect evaluation system doesn't exist yet, but we do have access to measures of teacher performance that are far better than seniority: teacher ratings, classroom management, teacher attendance, specific licensure, peer or principal review, value-added student data.