Given this weak statistical evidence of positive relationships between student achievement and district or school data use (as reflected in the principal and teacher survey items), we turned to our qualitative data, which provided the following insights:
On the basis of these survey results, we created three measures: (1) the principal's overall assessment of the teacher's effectiveness, which is a single item from the survey; (2) the teacher's ability to improve student academic performance, which is a simple average of the organization, classroom management, reading achievement, and math achievement survey items; and (3) the teacher's ability to increase student satisfaction, which is a simple average of the role model and student satisfaction survey items.
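The "single item" and "simple average" constructions above can be sketched as follows; the item names and scores here are hypothetical placeholders, not the actual survey instrument:

```python
# Sketch of the three teacher measures described above, assuming each
# survey item yields a numeric score; item names/values are invented.
def simple_average(values):
    """Simple (unweighted) mean of a list of item scores."""
    return sum(values) / len(values)

responses = {
    "overall_effectiveness": 4,   # single item for measure 1
    "organization": 5,
    "classroom_management": 4,
    "reading_achievement": 3,
    "math_achievement": 4,
    "role_model": 5,
    "student_satisfaction": 4,
}

# Measure 1: principal's overall assessment (one survey item, used as-is)
m1 = responses["overall_effectiveness"]

# Measure 2: ability to improve academic performance (4-item simple average)
m2 = simple_average([responses[k] for k in
                     ("organization", "classroom_management",
                      "reading_achievement", "math_achievement")])

# Measure 3: ability to increase student satisfaction (2-item simple average)
m3 = simple_average([responses[k] for k in
                     ("role_model", "student_satisfaction")])
```

A simple (unweighted) average treats every item as equally informative, which is the usual choice when items are assumed to tap the same underlying construct.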
The Department of Education and Training Queensland School Opinion Survey for Staff includes a group of items related to building teacher capability.
This 36-point gap in support between teachers and the public is the largest observed for any item on our survey.
One survey item asked respondents to rate, on a scale of 1 (not important) to 5 (very important), the importance of various actors in informing their position on teacher evaluation policy.
For all items in the survey, the responses of the public, parents, teachers, African Americans, and Hispanic adults are posted at educationnext.org/edfacts.
Quantitative data for this sub-study derived from responses to 17 items from the teacher survey.
After revisions and more discussions with teachers and principals, we were ready with a Round One teacher survey of 117 items and a principal survey of 149 items.
Evidence for this sub-study was provided by responses to 58 items on the first round of teacher surveys and 58 items from the first round of principal surveys.
The teacher survey administered to all participating schools during the first round of data collection included a set of items designed to measure the relative influence of those in multiple roles on school decision making (see Section 1.1).
Both the teacher surveys and both the principal surveys contained some items from established instruments with good reliability measures, as well as many new items and scales.
We divided the Round One teacher survey into sections with items about:
For Round Two, we collaboratively developed a revised 131-item teacher survey and a 105-item principal survey.
We measured three additional variables with the teacher survey: school leadership (20 items), class conditions (15 items), and school conditions (21 items).
The seven survey items loading on Factor 2 measure the frequency with which specific actions with a direct focus on instructional improvement were enacted by the principal with individual teachers.
Although 95 percent of those surveyed in the August 2015 Phi Delta Kappa/Gallup Poll on Public Education said that the most important way to improve schools is to improve the quality of teachers, professional development is often the first budget item to be cut or reduced.
As Appendix A explains in considerably more detail, our instrument for the second survey of teachers includes 131 items.
Quantitative data included items from the second teacher survey and student performance data on state-level achievement tests.
For this evidence we examined responses to 36 of the 104 items included in the first teacher survey.
The survey asked teachers to rate items such as students' attention levels, the teacher's enjoyment, and the effectiveness of the lesson.
Second, teachers completed a 6-item post-professional-development survey immediately after the one-day professional development workshop (see Additional file 1: Table S2).
Ideally, there are survey programs through the US Department of Education that do have these types of panel surveys, where they go back periodically and interview parents, teachers, school principals, and students, and see how they respond differently to similar items over a very long period of time.
These scores can be used to compare teachers who administered the same survey items at the same grade and content level.
Teachers can use common content pre-tests and post-tests, as well as common items on student surveys, to produce scores of student learning that more accurately reflect their effectiveness in the classroom.
To develop average growth scores using surveys, teachers can use items that address student perceptions of how much they've learned and how hard they've worked.
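One plausible reading of the "average growth score" idea above is the mean per-student gain between matched pre- and post-instruction item responses; the scores below are invented for illustration:

```python
# Hypothetical sketch: average growth from matched pre/post survey items
# (e.g., "how much have you learned?" scored 1-5 per student).
# The data values here are made up, not from the source study.
def average_growth(pre, post):
    """Mean per-student gain across matched pre/post item scores."""
    gains = [after - before for before, after in zip(pre, post)]
    return sum(gains) / len(gains)

pre_scores = [2, 3, 2, 4]    # one score per student before instruction
post_scores = [4, 4, 3, 5]   # same students, same item, after instruction

growth = average_growth(pre_scores, post_scores)
```

Because the same students answer the same items twice, each student serves as their own baseline, which is what lets the resulting score reflect growth rather than absolute standing.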
The survey included five questions and an item that allowed student teachers to write additional comments.
Items were pulled from this survey and combined with original items about whether or not teachers think students have preconceptions about math, qualities of those misconceptions, and how teachers work with student preconceptions to teach effectively.
In 2012, 17 preservice teachers completed the survey, which included 24 original items used to assess their beliefs about the role of students' thinking in effective mathematics instruction.
The survey items were initially developed by members of the simulation committee and pretested with the first group of preservice teachers who completed the course.
The next row displays the question stem for the subsequent survey items: "To what extent did the educator preparation program prepare the beginning teacher to:"
To address this issue, the preservice teachers were assured anonymity and were reminded that their responses to the survey items were not reflected in the pass/fail grade for the course.
The assessment section of the CIERA survey included items arranged in four matrices, to maximize the amount of information obtained from teachers.
Several STEM-relevant variables show a significant association with effectiveness in math and science, including STEM teacher turnover, calculus and early algebra participation, and math and science instructional indices created from survey items in the data.
The survey, which required about 20 minutes to complete, measured the collective leadership and teacher-performance antecedents described in our framework: 9 items measured collective leadership, 9 items measured teacher capacity, 17 items measured teacher motivation, and 14 items measured teacher work settings or conditions.
Several survey items focused on how the preservice teachers came to a decision regarding the issue presented to them.
Teacher responses to 49 items from a 104-item survey provided the remaining data for this sub-study.
Our overall measure of school trust, on the basis of approximately two dozen survey items addressing teachers' attitudes toward their colleagues, principals, and parents, proved a powerful discriminator between improving and nonimproving schools.
Completion or noncompletion of the data-collection instruments did not impact the preservice teachers' grades, and they were informed that their responses to the survey items would be anonymous and used to make improvements to the simulation.
In June 2008, a revised version of the MDI, containing eight demographic background items and 71 survey items, was read aloud in class by classroom teachers to 80 fourth graders from seven classrooms in three elementary schools.