Many critics rightly decry the misuse of the generated data to rank school performance or teacher effectiveness. This is something of a red herring, and it tends to reflect issues with standardized testing that are more prevalent in the United States than anywhere in Canada. These kinds of concerns can affect schools and the way they prepare for and administer the tests. They can put additional stress on teachers and on kids as well. And finally, all of this angst often trickles down to parents, who are often "nudged" by their child's school to have their child opt out of writing the test, thus skewing the data and lessening its value.
Having said all of that, the Ministry has continued to recognize the value of the testing and has been working to update both the test items and the interactivity of the testing process in order to make it a more comfortable and engaging experience for students. These are great moves: they reduce test anxiety and yield a more accurate assessment of student achievement.
Unfortunately, in the rush to assuage teachers' concerns over the use of the tests, the Ministry made one change that will profoundly affect their value. As part of the update, the testing dates have been moved from mid-year (January/February) to mid-fall (October/November). Not only will this have a profound negative impact on student performance, it will disproportionately penalize certain parts of the school population.
Let me give you two examples. We are a Special Education school. Each year we welcome a large percentage of new students, especially in Grade 4, who have transferred in to receive intensive intervention to help address their learning challenges. Most of the first term is spent lowering anxiety, building self-confidence, and getting them to see school as a welcoming place and themselves as capable learners. Once we have unlocked those doors, the learning can begin. The imposition of a "high stakes" standardized test in the middle of that process would not only undermine what we are trying to do, it would also give a "false negative" of the students' actual abilities. By mid-winter, on the other hand, they are much more positive and capable learners, ready to take on the challenge. The data that an October/November test would give us would be less than useless.
Second example: The research is quite clear. All students lose a significant amount of their learning gains from the past year over the summer. Even in families where there are deliberate attempts to maintain performance levels, there is still a drop-off. It is estimated that an average student will take 6-8 weeks to recover what was lost over the summer and to return to May/June performance levels. In other words, they have just gotten back to the end-of-Grade-3 starting blocks when they are handed a Grade 4 test! In addition, a recent study out of MIT has shown that students from lower socio-economic strata (or "the poor" as MIT calls them) suffer far greater losses over the summer and take that much longer to bounce back, pushing them, in this case, well past the testing date.
In other words, at best, we will have a new FSA timeline that clearly disadvantages two groups: students from less affluent families, and students with learning challenges. So who benefits? When I asked the test developers why they had changed the dates, I got a simple and direct answer: to take the pressure off Grade 4 and 7 teachers so that they would not be held accountable for the results. In other words, we are adjusting our testing to meet the needs of the teachers, not their students. This may have been a glib response, or it may reflect the perceived societal pressures cited above. Whatever the reason, the timing is not good for kids.
This is something that needs to be rethought.