First, is it true that prior teacher evaluation systems (which were based almost, if not entirely, on teacher observational systems) yielded for "nearly every teacher satisfactory ratings"?
Whether the systems have in fact been reformed is also a matter of debate, in that states are still using many of the same observational systems they used before (i.e., not the "binary checklists" exaggerated in the original as well as in this report, although this was true of the district of focus in this study).
Disaggregated student performance and behavior data, reading plan, strategic plan, school improvement plans, professional learning needs, assessment data, and data from the district's
educator observational system were used to design a continuous, flexible professional development program.
Researchers do note, however, that "improvements are needed" if such observational systems are to carry the weight with which they are currently being tasked, again given the above and their findings below:
If interested, see the Review of Article #1 — the introduction to the special issue here; see the Review of Article #2 — on VAMs' measurement errors, issues with retroactive revisions, and (more) problems with using standardized tests in VAMs here; see the Review of Article #3 — on VAMs' potentials here; and see the Review of Article #4 — on observational systems' potentials here.
Additional benefits of
this observational system include improvements in weather forecasting, marine resource management, and maritime navigation.
The state will also provide technical support and training for one of the four
observational systems that will be selected through the competitive RFP process.
They also found that when these observational systems were used for formative (i.e., informative, improvement) purposes, teachers' ratings were lower than when they were used for summative (i.e., final summary) purposes.
She also notes that if her system contradicts teachers' value-added scores, this too should "raise red flags" about the quality of the teacher, although she does not (in this article) pay any regard to the issues inherent not only in value-added measures but also in her observational system.
Researchers, as situated in the federal context surrounding these systems (including more than $4 billion now released to 19 states via Race to the Top and now 43 NCLB waivers also granted), examined the observational systems, rather than the VAMs themselves, as these observational systems typically accompany the VAM components in these (oft-high-stakes) systems.
For all DC teachers, this is THE observational system used, and for 83% of them these data are weighted at 75% of their total "worth" (Dee & Wyckoff, 2013, p. 10).
In this case, observational systems were the only real "hard data" available for the other 78% of teachers across school sites.
But to suggest that, because these observational indicators (artificially) correlate with teachers' value-added scores at "weak" and "very weak" levels (see Notes 1 and 2 below), these observational systems might "add" more "value" to the summative sides of teacher evaluations (i.e., their predictive value) is premature, not to mention a bit absurd.
After I posted about "Observational Systems: Correlations with Value-Added and Bias," a blog follower, associate professor, and statistician named Laura Ring Kapitula (see also a very influential article she wrote on VAMs here) posted comments on this site that I found of interest and thought would also be of interest to blog followers.
It provides an operational framework for integrating and enhancing the observational systems of participating countries and organizations into a comprehensive system focused on climate-related requirements.
Gavin Schmidt's expositions are just that, reflecting his belief system, modeled to be sure, just not integrated into
any observational system.
This is usually done by evaluating climate model data only where and when observations are available, in order to mimic
the observational system and avoid possible biases introduced by changing observational coverage.