The issue being addressed was the potential misuse of school-based data to rate schools or teachers based upon student performance. While we haven't embraced the over-dependence upon high-stakes testing that is so prevalent in the United States, there are still organizations such as the Fraser Institute that use FSA results (and soon high school literacy and numeracy results) as part of their "report card" process of ranking schools. As any parent or educator can tell you, performance cannot be measured by a raw score; the real test is not a snapshot but an examination of individual growth. A student or select school population with a long track record of high achievement will perform at the top of the pack almost regardless of the quality of instruction. However, the progress of a student who has been struggling and manages to raise their scores to at or close to grade level is an excellent measure of the impact of individual effort and of school and parental support.
Having said that, objective, province-wide measures such as the FSAs are very important to schools in providing a benchmark against which to measure the accuracy of their own internal assessments. For us, this type of testing (along with other standardized measures such as the DRA and DIBELS) gives us a window into the effectiveness of our practice and the accuracy of our performance evaluations. I am hoping that the BCSTA recognizes that fact and that its statement that the data should only be shared with parents assumes it will naturally be shared with individual schools as well in order to inform their practice. To leave teachers and schools out of this equation ultimately makes the whole testing process quite pointless and squanders a valuable tool for programme and school improvement.
Whenever periodic dip-sticking of student performance is implemented by government, there is always pushback from teachers' unions. They often argue (as has been the case in both Ontario and BC) that grade-specific testing puts inordinate pressure on the teachers of the classes being assessed and runs the risk of being used as a teacher-performance measure for evaluative purposes. Needless to say, given the range of abilities in any classroom, this is hardly an objective standard by which to rate a teacher, but the belief remains strong enough among educators that experienced teachers often avoid teaching those grade levels and like-minded parents are actively encouraged to refuse to allow their children to take the tests.
Earlier this month, I had the opportunity to attend a briefing session at the Ministry of Education in Victoria. My role was to represent the Special Education schools across the province and, given that a major focus of the meeting was to be the proposed revisions to the Foundation Skills Assessment (FSA) and the design of the new high school literacy and numeracy proficiency exams, I was glad to be taking part in the discussions.
After a couple of hours of back-and-forth dialogue on a wide range of issues surrounding the new curriculum, it was finally time for the assessment piece. The assessment team outlined the process by which the new literacy and numeracy tests were to be phased in and spoke about their assumption that high school students might write the tests multiple times in order to secure the highest mark possible. Setting aside the questionable premise that teenagers would voluntarily take a test over and over, it was clear that the purpose of this new testing would ultimately be to provide a notation on their transcripts of their highest level of achievement, as information for post-secondary institutions or employers. Unlike the previous high school examinations, which were eliminated last year, the onus for performance would rest solely on the student, and the school would simply manage the process. The teacher had been taken out of the equation.
Compounding this approach was a more problematic change in the FSA administration. The proposal was to move the date of the testing from mid-year (February/March) to the fall (late October). Clearly, for students with learning disabilities, moving the test to such an early date in the school year would work strongly against our getting an accurate measure of performance. Rather than giving six months of direct instruction and support a chance to take effect, we were being given six weeks!
When I asked for the reason, the Ministry team (all seconded teachers) indicated that the change would take the pressure off Grade 4 and 7 teachers because the test "was more about how they did last year rather than how they were doing this one." To the casual observer, the timing of the testing might seem a moot point, but in actual fact, going forward, we will get far less useful data for improving our own teaching and will receive instead a set of numbers that are basically meaningless.
There is only one reason to collect performance data on students: to inform our pedagogical practice so that we can be more effective in supporting them. Diluting that data for political reasons does everyone a disservice.