The student progress measure considers the average
change in test scores from year to year and the percentage of students who made progress from one year to the next.
These include home, school and student factors that influence student learning gains and that matter more than the individual teacher in
explaining changes in test scores.
Using over 20 years of student achievement data, the researchers found that changes in the quality of the teaching staff "strongly predict changes in test scores across consecutive cohorts of students in the same school, grade, and subject."
When we perform a similar analysis for all teachers, we again find that changes in the quality of the teaching staff strongly
predict changes in test scores across consecutive cohorts of students in the same school, grade, and subject.
An ideal experiment to address this question would randomly assign schools to test-based accountability and then observe changes in both test scores and long-term outcomes, comparing the results to those of a control group of schools.
We find that, prior to the introduction of ERI, teacher experience levels were not associated with changes in test scores, suggesting that this assumption is valid and that our results can be interpreted as the effect of ERI-induced retirements on test scores.
Results have been mixed, ranging from gains in high school graduation and college enrollment rates (e.g., Chingos and Peterson 2012) and small increases in reading and math scores (e.g., Greene et al. 1998), to increases in math but not reading scores (Rouse 1998), to no significant change in test scores (e.g., Howell and Peterson 2006; Wolf et al. 2011).
Economists have already developed a statistical method called value-added modeling that calculates how much teachers help their students learn, based on changes in test scores from year to year.
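The core of the value-added idea can be sketched in a few lines: a teacher's estimated contribution is her students' average year-over-year gain relative to the average gain of all students. This is a toy illustration with invented numbers, not any specific value-added model (real models adjust for student background, prior achievement, and measurement error).

```python
# Toy sketch of the value-added idea: compare a teacher's students' average
# score gain to the overall average gain. All data here is invented.

def value_added(teacher_gains, all_gains):
    """Teacher's mean year-over-year gain minus the overall mean gain."""
    teacher_avg = sum(teacher_gains) / len(teacher_gains)
    overall_avg = sum(all_gains) / len(all_gains)
    return teacher_avg - overall_avg

# Hypothetical gains for one teacher's students vs. all students statewide.
teacher_gains = [6, 8, 5, 9]               # score change from last year
all_gains = [4, 5, 3, 6, 7, 5, 4, 6]
print(value_added(teacher_gains, all_gains))  # → 2.0
```

Here the teacher's students gained 7 points on average against a statewide average of 5, so her estimated value added is 2 points.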
Statisticians began the effort last year by ranking all the teachers using a statistical method known as value-added modeling, which calculates how much each teacher has helped students learn based on changes in test scores from year to year.
If we judged the quality of schools entirely based on short-term changes in test scores, as many reformers would like to do, we'd say this school was doing a great job.
It compared year-to-year changes in test scores and singled out grades within schools for which gains were 3 standard deviations or more from the average statewide gain on that test.
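The screening rule just described is a simple z-score cutoff. A minimal sketch, assuming the statewide mean and standard deviation of gains are already known (the function name and numbers are hypothetical):

```python
# Flag any gain that sits 3+ standard deviations from the statewide average,
# per the screening rule described above. All numbers are invented.

def flag_outlier_gains(gains, state_mean, state_sd, threshold=3.0):
    """Return indices of gains at least `threshold` SDs from the state mean."""
    return [i for i, g in enumerate(gains)
            if abs(g - state_mean) >= threshold * state_sd]

# Hypothetical: statewide gains average 2 points with an SD of 1.5 points.
gains = [2.0, 1.5, 2.5, 2.2, 1.8, 14.0]   # one implausibly large gain
print(flag_outlier_gains(gains, state_mean=2.0, state_sd=1.5))  # → [5]
```

With a 3-SD threshold, only the 14-point gain (12 points, or 8 SDs, above the mean) gets flagged for review.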
In other words, what was the change in test scores for 4th graders from year to year at a school that had teacher turnover in that grade, compared to the same change at a school that did not have teacher turnover in that grade?
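The comparison above amounts to differencing the cohort-to-cohort change across the two kinds of schools. A toy sketch with invented averages (real analyses would pool many schools and adjust for other factors):

```python
# Compare the cohort-to-cohort change in 4th-grade scores at a school with
# teacher turnover in that grade vs. one without. Numbers are invented.

def cohort_change(last_year_avg, this_year_avg):
    """Change in the grade's average score across consecutive cohorts."""
    return this_year_avg - last_year_avg

turnover_change = cohort_change(last_year_avg=250.0, this_year_avg=246.0)
stable_change = cohort_change(last_year_avg=250.0, this_year_avg=251.0)

# The gap between the two changes is the quantity of interest.
print(turnover_change - stable_change)  # → -5.0
```

In this made-up example, the turnover school's 4th-grade average fell 4 points while the stable school's rose 1 point, a 5-point gap attributable (in the design's logic) to the turnover.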
I also made my best effort in the statistical analysis to isolate
the change in test scores that could be attributed reasonably to the resources schools dedicate to teaching students.
An initial problem with their analysis is that Amrein and Berliner disregarded the magnitude of
any changes in test scores.
It's true that test scores are correlated with some measures of later life success, but for test-based accountability to work we would need to see that changes in test scores caused by schools are associated with changes in later life success for students.
Today, it is expected that
changes in test scores will be factored into the evaluations of interventions.
Well, I've been making the argument for a while now that there is remarkably little evidence linking near-term changes in test scores to changes in later life outcomes for students, like graduating high school, enrolling in college, completing college, and earnings.
We reanalyzed the data in a number of different ways, but were unable to find any indication that voters cast their ballots based on
changes in test scores.
And the size of
the change in test scores across these consecutive cohorts should correspond to the change in the average value added across all teachers in the grade.
We then averaged
the change in test scores for each cohort in the school on each test and subject.
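The claim that cohort-to-cohort score changes should track changes in the staff's average value added can be illustrated with a toy calculation (all numbers invented, assuming value added is measured in score points):

```python
# Toy check of the claim above: if the grade's teaching staff changes, the
# change in average teacher value added should roughly match the change in
# the cohort's average score. All numbers are invented.

def staff_avg(value_added_by_teacher):
    """Average value added across all teachers in the grade."""
    return sum(value_added_by_teacher) / len(value_added_by_teacher)

# Year 1 staff vs. year 2 staff for the same school, grade, and subject.
year1_va = [2.0, 0.0, -1.0]    # value added per teacher, in score points
year2_va = [2.0, 3.0, -1.0]    # one weak teacher replaced by a strong one

predicted_change = staff_avg(year2_va) - staff_avg(year1_va)
print(round(predicted_change, 2))  # → 1.0
```

Replacing a zero-value-added teacher with a +3 teacher raises the staff average by 1 point, so the next cohort's average score should rise by about 1 point if the claim holds.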
This year is the third year that PACE is gathering information about how non-academic indicators correlate to
changes in test scores.
The original "No Child" law requires schools to consider the change in test scores among different "subgroups."