We can think of value-added estimates as measuring three components: (1) true teaching effectiveness that persists across years; (2) true effectiveness that varies from year to year; and (3) measurement error.
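This decomposition can be sketched with a small simulation. The variance shares below are invented for illustration (they are not estimates from the research literature); the point is only that when a single year's estimate mixes a stable component, a year-specific component, and noise, the correlation between two years' estimates recovers the persistent share of the variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n_teachers = 200_000

# Illustrative variance shares for the three components (assumed, not
# empirical): persistent effectiveness, year-to-year variation, and
# measurement error. They sum to 1 so shares read as proportions.
var_persistent, var_transient, var_noise = 0.5, 0.3, 0.2

# Component (1): true effectiveness that persists across years.
persistent = rng.normal(0.0, np.sqrt(var_persistent), n_teachers)

def yearly_estimate():
    # One year's value-added estimate = persistent component plus that
    # year's transient component (2) plus fresh measurement error (3).
    transient = rng.normal(0.0, np.sqrt(var_transient), n_teachers)
    noise = rng.normal(0.0, np.sqrt(var_noise), n_teachers)
    return persistent + transient + noise

year1 = yearly_estimate()
year2 = yearly_estimate()

# Only the persistent component is shared across years, so the
# year-to-year correlation should be near var_persistent / total = 0.5.
corr = np.corrcoef(year1, year2)[0, 1]
print(round(corr, 2))
```

Under these assumed shares, half of the variance in any one year's estimate reflects stable effectiveness, which is why adjacent-year estimates correlate at about 0.5 rather than 1.0.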
It turns out the same is true for measures of teaching effectiveness: each has its strengths and weaknesses, and by combining them you can capitalize on the former and minimize the latter.
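One simple way to make "combining" concrete is a precision-weighted composite: weight each measure by the inverse of its error variance, so the noisier measure counts for less. The error variances below are hypothetical, chosen only to show that the composite tracks true effectiveness better than either measure alone.

```python
import numpy as np

rng = np.random.default_rng(1)
n_teachers = 200_000

# Latent "true" teaching effectiveness (standardized).
true_effect = rng.normal(0.0, 1.0, n_teachers)

# Hypothetical error variances: here the observation score is assumed
# noisier than the value-added score. These numbers are illustrative.
obs_err_var, va_err_var = 1.0, 0.5
obs_score = true_effect + rng.normal(0.0, np.sqrt(obs_err_var), n_teachers)
va_score = true_effect + rng.normal(0.0, np.sqrt(va_err_var), n_teachers)

# Precision weights: inverse error variance, so less noise = more weight.
w_obs, w_va = 1.0 / obs_err_var, 1.0 / va_err_var
composite = (w_obs * obs_score + w_va * va_score) / (w_obs + w_va)

def corr_with_truth(measure):
    return np.corrcoef(measure, true_effect)[0, 1]

# The composite should correlate with true effectiveness more strongly
# than either single measure does.
print(corr_with_truth(obs_score),
      corr_with_truth(va_score),
      corr_with_truth(composite))
```

The design choice here is the weighting rule: inverse-error-variance weights are the classic way to pool noisy measures of the same quantity, though in practice the error variances would themselves have to be estimated.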
This is especially important when observational (and value-added) data are used in high-stakes accountability systems, since the data yielded by both measurement systems may be less likely to reflect "true" teaching effectiveness due to bias.