Measurement error in VA measures of teaching effectiveness: The role of student effort

Sunday, October 11, 2015: 11:15 AM
Jeffrey A. Livingston, Ph.D., Economics, Bentley University, Waltham, MA
Several recent studies provide strong evidence that teacher quality, as measured by teachers' "value added" (VA) impact on student test scores, is strongly correlated with student outcomes later in life. These studies have helped fuel growing support for policies that base decisions to hire, fire, and compensate teachers at least partially on standardized test results. We design an experiment that explores one reason why VA might be an unreliable measure of teacher effectiveness. Students take two standardized tests at the same time: the official standardized test administered by the state (Illinois), and a "probe" that we designed to measure the same set of knowledge and skills as the official test. We offer financial incentives for improvement on the probe, but not on the official test. The results provide strong evidence that students improve on the incentivized test but not on the non-incentivized test, even though the two tests measure similar things and were taken at approximately the same time. We interpret this finding as evidence that, in the absence of financial incentives, students may not show what they actually know on tests in which they have no personal stake, making such tests an unreliable measure of the knowledge and skills their teachers have provided. Indeed, in our small sample, we show that the value-added measure for the tutors who were part of our experiment (who take the role of teachers) differs for a given tutor depending on whether the incentivized or non-incentivized test is used.
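The abstract does not describe the estimation procedure, so the following is only an illustrative sketch of the general idea: under the common simplifying assumption that a tutor's VA can be proxied by a tutor fixed effect in a regression of post-test scores on pre-test scores, computing that fixed effect separately for an incentivized and a non-incentivized outcome shows how the two VA measures can diverge when student effort varies on the stakes-free test. All data, names, and parameters below are synthetic and hypothetical, not taken from the study.

```python
# Illustrative sketch only: a simple value-added model where each tutor's VA is
# the tutor fixed effect in a regression of post-test scores on pre-test scores.
# Data are simulated; nothing here reproduces the paper's actual estimation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_tutors, students_per_tutor = 5, 40
tutor_effect = rng.normal(0, 3, n_tutors)                       # hypothetical true tutor quality
effort_gap = rng.normal(0, 3, n_tutors * students_per_tutor)    # effort noise on the stakes-free test

df = pd.DataFrame({
    "tutor": np.repeat([f"T{i}" for i in range(n_tutors)], students_per_tutor),
    "pre": rng.normal(50, 10, n_tutors * students_per_tutor),
})
gain = np.repeat(tutor_effect, students_per_tutor) + rng.normal(0, 2, len(df))
df["score_incentivized"] = df["pre"] + gain                     # students try their best
df["score_official"] = df["pre"] + gain + effort_gap            # effort varies with no personal stake

def value_added(outcome):
    """Tutor fixed effects from a regression of the outcome on the pre-test score."""
    fit = smf.ols(f"{outcome} ~ pre + C(tutor) - 1", data=df).fit()
    return fit.params.filter(like="C(tutor)")

# The same tutors receive different VA estimates depending on which test is used.
comparison = pd.concat(
    [value_added("score_incentivized"), value_added("score_official")],
    axis=1, keys=["incentivized", "official"],
)
print(comparison)
```

In this simulation the divergence between the two columns comes entirely from the effort term added to the non-incentivized score, which is the mechanism the experiment is designed to isolate.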